Test Report: Hyperkit_macOS 19370

dd51e72d60a15da3a1a4a8c267729efa6313a896:2024-08-06:35671

Tests failed (24/222)

TestOffline (195.15s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-733000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-733000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : exit status 80 (3m9.757577028s)

-- stdout --
	* [offline-docker-733000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "offline-docker-733000" primary control-plane node in "offline-docker-733000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "offline-docker-733000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
-- /stdout --
** stderr ** 
	I0806 01:09:59.847736    6489 out.go:291] Setting OutFile to fd 1 ...
	I0806 01:09:59.847932    6489 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:09:59.847938    6489 out.go:304] Setting ErrFile to fd 2...
	I0806 01:09:59.847941    6489 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:09:59.848107    6489 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	I0806 01:09:59.849856    6489 out.go:298] Setting JSON to false
	I0806 01:09:59.876513    6489 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":4161,"bootTime":1722927638,"procs":429,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0806 01:09:59.876605    6489 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 01:09:59.939351    6489 out.go:177] * [offline-docker-733000] minikube v1.33.1 on Darwin 14.5
	I0806 01:09:59.983605    6489 notify.go:220] Checking for updates...
	I0806 01:10:00.008274    6489 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 01:10:00.074646    6489 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 01:10:00.095375    6489 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0806 01:10:00.116538    6489 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 01:10:00.137509    6489 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 01:10:00.158409    6489 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 01:10:00.179729    6489 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 01:10:00.207530    6489 out.go:177] * Using the hyperkit driver based on user configuration
	I0806 01:10:00.249805    6489 start.go:297] selected driver: hyperkit
	I0806 01:10:00.249832    6489 start.go:901] validating driver "hyperkit" against <nil>
	I0806 01:10:00.249852    6489 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 01:10:00.254251    6489 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:10:00.254396    6489 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19370-944/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0806 01:10:00.262712    6489 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0806 01:10:00.266390    6489 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 01:10:00.266412    6489 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0806 01:10:00.266450    6489 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 01:10:00.266670    6489 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 01:10:00.266699    6489 cni.go:84] Creating CNI manager for ""
	I0806 01:10:00.266715    6489 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 01:10:00.266721    6489 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0806 01:10:00.266777    6489 start.go:340] cluster config:
	{Name:offline-docker-733000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-733000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 01:10:00.266865    6489 iso.go:125] acquiring lock: {Name:mka9ceffb203a07dd8928fb34e5b66df1a4204ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:10:00.314772    6489 out.go:177] * Starting "offline-docker-733000" primary control-plane node in "offline-docker-733000" cluster
	I0806 01:10:00.356350    6489 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 01:10:00.356385    6489 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0806 01:10:00.356400    6489 cache.go:56] Caching tarball of preloaded images
	I0806 01:10:00.356592    6489 preload.go:172] Found /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0806 01:10:00.356612    6489 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 01:10:00.356878    6489 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/offline-docker-733000/config.json ...
	I0806 01:10:00.356897    6489 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/offline-docker-733000/config.json: {Name:mka91d46736e4c8ca936a9816586fb6494ca9227 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 01:10:00.357249    6489 start.go:360] acquireMachinesLock for offline-docker-733000: {Name:mk23fe223591838ba69a1052c4474834b6e8897d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:10:00.357306    6489 start.go:364] duration metric: took 42.053µs to acquireMachinesLock for "offline-docker-733000"
	I0806 01:10:00.357332    6489 start.go:93] Provisioning new machine with config: &{Name:offline-docker-733000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-733000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:10:00.357393    6489 start.go:125] createHost starting for "" (driver="hyperkit")
	I0806 01:10:00.378457    6489 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0806 01:10:00.378669    6489 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 01:10:00.378717    6489 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 01:10:00.387359    6489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53666
	I0806 01:10:00.387699    6489 main.go:141] libmachine: () Calling .GetVersion
	I0806 01:10:00.388123    6489 main.go:141] libmachine: Using API Version  1
	I0806 01:10:00.388135    6489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 01:10:00.388340    6489 main.go:141] libmachine: () Calling .GetMachineName
	I0806 01:10:00.388449    6489 main.go:141] libmachine: (offline-docker-733000) Calling .GetMachineName
	I0806 01:10:00.388526    6489 main.go:141] libmachine: (offline-docker-733000) Calling .DriverName
	I0806 01:10:00.388630    6489 start.go:159] libmachine.API.Create for "offline-docker-733000" (driver="hyperkit")
	I0806 01:10:00.388659    6489 client.go:168] LocalClient.Create starting
	I0806 01:10:00.388702    6489 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem
	I0806 01:10:00.388754    6489 main.go:141] libmachine: Decoding PEM data...
	I0806 01:10:00.388768    6489 main.go:141] libmachine: Parsing certificate...
	I0806 01:10:00.388842    6489 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem
	I0806 01:10:00.388881    6489 main.go:141] libmachine: Decoding PEM data...
	I0806 01:10:00.388893    6489 main.go:141] libmachine: Parsing certificate...
	I0806 01:10:00.388913    6489 main.go:141] libmachine: Running pre-create checks...
	I0806 01:10:00.388922    6489 main.go:141] libmachine: (offline-docker-733000) Calling .PreCreateCheck
	I0806 01:10:00.389012    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:10:00.389208    6489 main.go:141] libmachine: (offline-docker-733000) Calling .GetConfigRaw
	I0806 01:10:00.400336    6489 main.go:141] libmachine: Creating machine...
	I0806 01:10:00.400360    6489 main.go:141] libmachine: (offline-docker-733000) Calling .Create
	I0806 01:10:00.400585    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:10:00.400850    6489 main.go:141] libmachine: (offline-docker-733000) DBG | I0806 01:10:00.400584    6510 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 01:10:00.400997    6489 main.go:141] libmachine: (offline-docker-733000) Downloading /Users/jenkins/minikube-integration/19370-944/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-944/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 01:10:00.862280    6489 main.go:141] libmachine: (offline-docker-733000) DBG | I0806 01:10:00.862217    6510 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/id_rsa...
	I0806 01:10:00.964200    6489 main.go:141] libmachine: (offline-docker-733000) DBG | I0806 01:10:00.964146    6510 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/offline-docker-733000.rawdisk...
	I0806 01:10:00.964229    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Writing magic tar header
	I0806 01:10:00.964251    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Writing SSH key tar header
	I0806 01:10:00.964627    6489 main.go:141] libmachine: (offline-docker-733000) DBG | I0806 01:10:00.964583    6510 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000 ...
	I0806 01:10:01.436196    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:10:01.436240    6489 main.go:141] libmachine: (offline-docker-733000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/hyperkit.pid
	I0806 01:10:01.436350    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Using UUID b37960fa-701d-4a77-8604-60ef118699f4
	I0806 01:10:01.598407    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Generated MAC 46:cd:df:ac:b:5a
	I0806 01:10:01.598427    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-733000
	I0806 01:10:01.598494    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:10:01 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b37960fa-701d-4a77-8604-60ef118699f4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001b0630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 01:10:01.598540    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:10:01 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b37960fa-701d-4a77-8604-60ef118699f4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001b0630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 01:10:01.598651    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:10:01 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "b37960fa-701d-4a77-8604-60ef118699f4", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/offline-docker-733000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-733000"}
	I0806 01:10:01.598719    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:10:01 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U b37960fa-701d-4a77-8604-60ef118699f4 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/offline-docker-733000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/console-ring -f kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-733000"
	I0806 01:10:01.598733    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:10:01 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0806 01:10:01.601809    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:10:01 DEBUG: hyperkit: Pid is 6535
	I0806 01:10:01.602235    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 0
	I0806 01:10:01.602252    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:10:01.602326    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6535
	I0806 01:10:01.603231    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for 46:cd:df:ac:b:5a in /var/db/dhcpd_leases ...
	I0806 01:10:01.603305    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:10:01.603323    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:10:01.603354    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:10:01.603387    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:10:01.603405    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:10:01.603419    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:10:01.603435    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:10:01.603450    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:10:01.603466    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:10:01.603487    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:10:01.603511    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:10:01.603541    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:10:01.603554    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:10:01.603569    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:10:01.603584    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:10:01.603606    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:10:01.603619    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:10:01.603629    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:10:01.609940    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:10:01 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0806 01:10:01.740514    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:10:01 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0806 01:10:01.741131    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:10:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 01:10:01.741151    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:10:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 01:10:01.741159    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:10:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 01:10:01.741168    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:10:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 01:10:02.117534    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:10:02 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0806 01:10:02.117554    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:10:02 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0806 01:10:02.232457    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:10:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 01:10:02.232485    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:10:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 01:10:02.232495    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:10:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 01:10:02.232505    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:10:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 01:10:02.233315    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:10:02 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0806 01:10:02.233326    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:10:02 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0806 01:10:03.603581    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 1
	I0806 01:10:03.603593    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:10:03.603688    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6535
	I0806 01:10:03.604465    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for 46:cd:df:ac:b:5a in /var/db/dhcpd_leases ...
	I0806 01:10:03.604531    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:10:03.604545    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:10:03.604555    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:10:03.604565    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:10:03.604587    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:10:03.604602    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:10:03.604609    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:10:03.604618    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:10:03.604625    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:10:03.604633    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:10:03.604645    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:10:03.604654    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:10:03.604661    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:10:03.604669    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:10:03.604678    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:10:03.604686    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:10:03.604698    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:10:03.604724    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:10:05.606668    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 2
	I0806 01:10:05.606685    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:10:05.606750    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6535
	I0806 01:10:05.607505    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for 46:cd:df:ac:b:5a in /var/db/dhcpd_leases ...
	I0806 01:10:05.607562    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:10:05.607571    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:10:05.607579    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:10:05.607586    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:10:05.607604    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:10:05.607617    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:10:05.607625    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:10:05.607631    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:10:05.607640    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:10:05.607649    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:10:05.607657    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:10:05.607664    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:10:05.607671    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:10:05.607689    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:10:05.607697    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:10:05.607707    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:10:05.607715    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:10:05.607724    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:10:07.608732    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 3
	I0806 01:10:07.608753    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:10:07.608845    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6535
	I0806 01:10:07.609712    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for 46:cd:df:ac:b:5a in /var/db/dhcpd_leases ...
	I0806 01:10:07.609786    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:10:07.609794    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:10:07.609803    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:10:07.609812    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:10:07.609819    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:10:07.609825    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:10:07.609845    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:10:07.609851    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:10:07.609858    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:10:07.609863    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:10:07.609871    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:10:07.609877    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:10:07.609884    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:10:07.609891    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:10:07.609898    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:10:07.609904    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:10:07.609910    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:10:07.609925    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:10:07.631385    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:10:07 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0806 01:10:07.631526    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:10:07 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0806 01:10:07.631535    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:10:07 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0806 01:10:07.651504    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:10:07 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0806 01:10:09.611106    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 4
	I0806 01:10:09.611123    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:10:09.611228    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6535
	I0806 01:10:09.612036    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for 46:cd:df:ac:b:5a in /var/db/dhcpd_leases ...
	I0806 01:10:09.612130    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:10:09.612141    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:10:09.612151    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:10:09.612159    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:10:09.612167    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:10:09.612179    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:10:09.612185    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:10:09.612212    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:10:09.612225    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:10:09.612237    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:10:09.612246    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:10:09.612263    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:10:09.612271    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:10:09.612281    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:10:09.612289    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:10:09.612297    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:10:09.612303    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:10:09.612311    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:10:11.613828    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 5
	I0806 01:10:11.613866    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:10:11.613896    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6535
	I0806 01:10:11.614661    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for 46:cd:df:ac:b:5a in /var/db/dhcpd_leases ...
	I0806 01:10:11.614704    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:10:11.614718    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:10:11.614730    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:10:11.614740    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:10:11.614750    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:10:11.614765    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:10:11.614777    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:10:11.614783    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:10:11.614804    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:10:11.614818    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:10:11.614827    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:10:11.614834    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:10:11.614843    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:10:11.614852    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:10:11.614865    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:10:11.614873    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:10:11.614885    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:10:11.614896    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:10:13.616699    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 6
	I0806 01:10:13.616722    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:10:13.616773    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6535
	I0806 01:10:13.617570    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for 46:cd:df:ac:b:5a in /var/db/dhcpd_leases ...
	I0806 01:10:13.617621    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:10:13.617637    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:10:13.617651    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:10:13.617661    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:10:13.617668    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:10:13.617674    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:10:13.617691    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:10:13.617725    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:10:13.617735    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:10:13.617743    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:10:13.617751    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:10:13.617759    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:10:13.617767    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:10:13.617776    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:10:13.617783    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:10:13.617789    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:10:13.617802    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:10:13.617810    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:10:15.617847    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 7
	I0806 01:10:15.617861    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:10:15.617914    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6535
	I0806 01:10:15.618687    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for 46:cd:df:ac:b:5a in /var/db/dhcpd_leases ...
	I0806 01:10:15.618740    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:10:15.618753    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:10:15.618773    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:10:15.618780    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:10:15.618787    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:10:15.618792    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:10:15.618805    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:10:15.618814    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:10:15.618820    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:10:15.618827    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:10:15.618833    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:10:15.618841    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:10:15.618850    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:10:15.618857    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:10:15.618865    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:10:15.618880    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:10:15.618899    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:10:15.618910    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:10:17.619323    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 8
	I0806 01:10:17.619336    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:10:17.619422    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6535
	I0806 01:10:17.620204    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for 46:cd:df:ac:b:5a in /var/db/dhcpd_leases ...
	I0806 01:10:17.620260    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:10:17.620270    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:10:17.620280    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:10:17.620286    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:10:17.620293    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:10:17.620299    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:10:17.620305    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:10:17.620310    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:10:17.620323    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:10:17.620333    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:10:17.620340    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:10:17.620348    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:10:17.620363    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:10:17.620374    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:10:17.620390    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:10:17.620403    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:10:17.620416    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:10:17.620423    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:10:19.621201    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 9
	I0806 01:10:19.621220    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:10:19.621338    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6535
	I0806 01:10:19.622119    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for 46:cd:df:ac:b:5a in /var/db/dhcpd_leases ...
	I0806 01:10:19.622168    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:10:19.622181    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:10:19.622194    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:10:19.622210    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:10:19.622217    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:10:19.622223    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:10:19.622240    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:10:19.622253    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:10:19.622262    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:10:19.622271    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:10:19.622278    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:10:19.622284    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:10:19.622294    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:10:19.622306    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:10:19.622317    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:10:19.622326    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:10:19.622332    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:10:19.622340    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:10:21.624344    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 10
	I0806 01:10:21.624360    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:10:21.624427    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6535
	I0806 01:10:21.625196    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for 46:cd:df:ac:b:5a in /var/db/dhcpd_leases ...
	I0806 01:10:21.625267    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:10:21.625280    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:10:21.625289    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:10:21.625295    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:10:21.625302    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:10:21.625308    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:10:21.625318    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:10:21.625326    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:10:21.625333    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:10:21.625338    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:10:21.625350    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:10:21.625362    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:10:21.625378    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:10:21.625391    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:10:21.625402    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:10:21.625410    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:10:21.625417    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:10:21.625425    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:10:23.625460    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 11
	I0806 01:10:23.625489    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:10:23.625577    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6535
	I0806 01:10:23.626342    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for 46:cd:df:ac:b:5a in /var/db/dhcpd_leases ...
	I0806 01:10:23.626396    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:10:23.626405    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:10:23.626414    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:10:23.626423    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:10:23.626430    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:10:23.626437    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:10:23.626443    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:10:23.626453    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:10:23.626462    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:10:23.626483    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:10:23.626497    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:10:23.626507    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:10:23.626515    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:10:23.626522    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:10:23.626529    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:10:23.626535    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:10:23.626542    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:10:23.626551    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:10:25.627095    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 12
	I0806 01:10:25.627108    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:10:25.627136    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6535
	I0806 01:10:25.627935    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for 46:cd:df:ac:b:5a in /var/db/dhcpd_leases ...
	I0806 01:10:25.627985    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:10:25.627999    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:10:25.628015    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:10:25.628024    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:10:25.628039    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:10:25.628051    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:10:25.628061    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:10:25.628069    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:10:25.628077    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:10:25.628094    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:10:25.628102    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:10:25.628113    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:10:25.628123    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:10:25.628143    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:10:25.628154    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:10:25.628176    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:10:25.628189    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:10:25.628198    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:10:27.628455    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 13
	I0806 01:10:27.628467    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:10:27.628577    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6535
	I0806 01:10:27.629328    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for 46:cd:df:ac:b:5a in /var/db/dhcpd_leases ...
	I0806 01:10:27.629406    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:10:27.629421    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:10:27.629432    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:10:27.629442    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:10:27.629460    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:10:27.629471    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:10:27.629492    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:10:27.629505    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:10:27.629515    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:10:27.629523    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:10:27.629530    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:10:27.629537    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:10:27.629543    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:10:27.629550    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:10:27.629557    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:10:27.629565    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:10:27.629571    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:10:27.629579    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:10:29.631213    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 14
	I0806 01:10:29.631227    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:10:29.631318    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6535
	I0806 01:10:29.632208    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for 46:cd:df:ac:b:5a in /var/db/dhcpd_leases ...
	I0806 01:10:29.632254    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:10:29.632265    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:10:29.632283    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:10:29.632289    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:10:29.632297    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:10:29.632306    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:10:29.632326    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:10:29.632342    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:10:29.632354    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:10:29.632367    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:10:29.632381    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:10:29.632399    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:10:29.632419    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:10:29.632428    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:10:29.632442    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:10:29.632449    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:10:29.632455    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:10:29.632465    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:10:31.633424    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 15
	I0806 01:10:31.633436    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:10:31.633546    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6535
	I0806 01:10:31.634326    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for 46:cd:df:ac:b:5a in /var/db/dhcpd_leases ...
	I0806 01:10:31.634385    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:10:31.634399    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:10:31.634413    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:10:31.634427    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:10:31.634437    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:10:31.634452    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:10:31.634466    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:10:31.634475    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:10:31.634500    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:10:31.634518    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:10:31.634533    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:10:31.634547    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:10:31.634555    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:10:31.634574    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:10:31.634587    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:10:31.634597    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:10:31.634606    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:10:31.634614    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:10:33.636415    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 16
	I0806 01:10:33.636428    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:10:33.636511    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6535
	I0806 01:10:33.637315    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for 46:cd:df:ac:b:5a in /var/db/dhcpd_leases ...
	I0806 01:10:33.637346    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:10:33.637355    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:10:33.637383    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:10:33.637391    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:10:33.637398    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:10:33.637405    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:10:33.637416    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:10:33.637430    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:10:33.637439    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:10:33.637447    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:10:33.637453    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:10:33.637460    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:10:33.637468    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:10:33.637473    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:10:33.637487    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:10:33.637499    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:10:33.637508    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:10:33.637516    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:10:35.638142    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 17
	I0806 01:10:35.638154    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:10:35.638207    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6535
	I0806 01:10:35.639080    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for 46:cd:df:ac:b:5a in /var/db/dhcpd_leases ...
	I0806 01:10:35.639119    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:10:35.639132    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:10:35.639142    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:10:35.639162    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:10:35.639169    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:10:35.639183    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:10:35.639191    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:10:35.639199    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:10:35.639206    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:10:35.639215    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:10:35.639222    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:10:35.639229    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:10:35.639238    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:10:35.639246    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:10:35.639253    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:10:35.639269    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:10:35.639277    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:10:35.639283    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:10:37.641344    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 18
	I0806 01:10:37.641360    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:10:37.641413    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6535
	I0806 01:10:37.642210    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for 46:cd:df:ac:b:5a in /var/db/dhcpd_leases ...
	I0806 01:10:37.642265    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:10:37.642276    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:10:37.642308    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:10:37.642324    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:10:37.642333    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:10:37.642339    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:10:37.642347    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:10:37.642354    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:10:37.642361    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:10:37.642371    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:10:37.642387    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:10:37.642400    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:10:37.642412    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:10:37.642419    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:10:37.642425    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:10:37.642432    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:10:37.642439    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:10:37.642448    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:10:39.643122    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 19
	I0806 01:10:39.643137    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:10:39.643260    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6535
	I0806 01:10:39.644161    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for 46:cd:df:ac:b:5a in /var/db/dhcpd_leases ...
	I0806 01:10:39.644173    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:10:39.644182    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:10:39.644197    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:10:39.644203    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:10:39.644211    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:10:39.644216    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:10:39.644239    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:10:39.644252    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:10:39.644266    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:10:39.644272    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:10:39.644278    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:10:39.644286    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:10:39.644296    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:10:39.644305    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:10:39.644312    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:10:39.644321    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:10:39.644327    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:10:39.644335    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:10:41.644719    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 20
	I0806 01:10:41.644735    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:10:41.644774    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6535
	I0806 01:10:41.645591    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for 46:cd:df:ac:b:5a in /var/db/dhcpd_leases ...
	I0806 01:10:41.645600    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:10:41.645622    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:10:41.645631    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:10:41.645638    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:10:41.645644    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:10:41.645651    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:10:41.645661    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:10:41.645668    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:10:41.645692    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:10:41.645711    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:10:41.645719    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:10:41.645727    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:10:41.645735    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:10:41.645743    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:10:41.645750    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:10:41.645766    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:10:41.645778    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:10:41.645788    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:10:43.647623    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 21
	I0806 01:10:43.647645    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:10:43.647743    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6535
	I0806 01:10:43.648799    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for 46:cd:df:ac:b:5a in /var/db/dhcpd_leases ...
	I0806 01:10:43.648849    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:10:43.648873    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:10:43.648884    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:10:43.648890    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:10:43.648898    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:10:43.648906    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:10:43.648913    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:10:43.648920    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:10:43.648927    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:10:43.648941    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:10:43.648954    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:10:43.648962    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:10:43.648970    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:10:43.648978    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:10:43.648985    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:10:43.648992    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:10:43.649010    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:10:43.649022    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:10:45.651024    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 22
	I0806 01:10:45.651039    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:10:45.651150    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6535
	I0806 01:10:45.652064    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for 46:cd:df:ac:b:5a in /var/db/dhcpd_leases ...
	I0806 01:10:45.652106    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:10:45.652114    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:10:45.652124    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:10:45.652131    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:10:45.652146    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:10:45.652161    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:10:45.652169    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:10:45.652177    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:10:45.652186    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:10:45.652199    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:10:45.652206    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:10:45.652214    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:10:45.652234    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:10:45.652245    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:10:45.652254    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:10:45.652263    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:10:45.652270    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:10:45.652278    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:10:47.652613    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 23
	I0806 01:10:47.652630    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:10:47.652773    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6535
	I0806 01:10:47.653708    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for 46:cd:df:ac:b:5a in /var/db/dhcpd_leases ...
	I0806 01:10:47.653717    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:10:47.653726    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:10:47.653732    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:10:47.653739    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:10:47.653744    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:10:47.653779    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:10:47.653791    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:10:47.653828    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:10:47.653841    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:10:47.653849    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:10:47.653860    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:10:47.653869    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:10:47.653876    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:10:47.653884    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:10:47.653894    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:10:47.653902    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:10:47.653908    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:10:47.653914    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:10:49.654110    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 24
	I0806 01:10:49.654128    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:10:49.654230    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6535
	I0806 01:10:49.655021    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for 46:cd:df:ac:b:5a in /var/db/dhcpd_leases ...
	I0806 01:10:49.655054    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:10:49.655064    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:10:49.655071    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:10:49.655077    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:10:49.655086    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:10:49.655093    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:10:49.655100    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:10:49.655108    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:10:49.655115    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:10:49.655122    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:10:49.655130    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:10:49.655137    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:10:49.655144    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:10:49.655151    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:10:49.655158    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:10:49.655166    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:10:49.655173    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:10:49.655180    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:10:51.657256    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 25
	I0806 01:10:51.657271    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:10:51.657374    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6535
	I0806 01:10:51.658321    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for 46:cd:df:ac:b:5a in /var/db/dhcpd_leases ...
	I0806 01:10:51.658354    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:10:51.658361    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:10:51.658371    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:10:51.658389    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:10:51.658396    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:10:51.658403    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:10:51.658412    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:10:51.658418    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:10:51.658424    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:10:51.658448    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:10:51.658464    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:10:51.658482    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:10:51.658496    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:10:51.658509    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:10:51.658516    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:10:51.658525    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:10:51.658539    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:10:51.658551    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:10:53.660603    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 26
	I0806 01:10:53.660616    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:10:53.660750    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6535
	I0806 01:10:53.661591    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for 46:cd:df:ac:b:5a in /var/db/dhcpd_leases ...
	I0806 01:10:53.661643    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:10:53.661653    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:10:53.661680    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:10:53.661693    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:10:53.661709    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:10:53.661719    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:10:53.661726    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:10:53.661741    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:10:53.661747    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:10:53.661754    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:10:53.661761    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:10:53.661767    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:10:53.661774    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:10:53.661783    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:10:53.661791    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:10:53.661797    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:10:53.661804    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:10:53.661812    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:10:55.663833    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 27
	I0806 01:10:55.663845    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:10:55.663918    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6535
	I0806 01:10:55.664751    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for 46:cd:df:ac:b:5a in /var/db/dhcpd_leases ...
	I0806 01:10:55.664797    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:10:55.664810    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:10:55.664835    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:10:55.664848    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:10:55.664859    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:10:55.664868    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:10:55.664880    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:10:55.664887    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:10:55.664893    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:10:55.664905    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:10:55.664920    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:10:55.664931    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:10:55.664940    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:10:55.664946    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:10:55.664953    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:10:55.664960    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:10:55.664968    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:10:55.664977    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:10:57.666054    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 28
	I0806 01:10:57.666067    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:10:57.666141    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6535
	I0806 01:10:57.666911    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for 46:cd:df:ac:b:5a in /var/db/dhcpd_leases ...
	I0806 01:10:57.666975    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:10:57.666986    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:10:57.666995    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:10:57.667002    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:10:57.667008    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:10:57.667014    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:10:57.667029    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:10:57.667055    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:10:57.667062    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:10:57.667083    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:10:57.667095    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:10:57.667105    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:10:57.667114    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:10:57.667131    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:10:57.667140    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:10:57.667156    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:10:57.667167    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:10:57.667179    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:10:59.667710    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 29
	I0806 01:10:59.667723    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:10:59.667769    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6535
	I0806 01:10:59.668538    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for 46:cd:df:ac:b:5a in /var/db/dhcpd_leases ...
	I0806 01:10:59.668596    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:10:59.668608    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:10:59.668617    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:10:59.668623    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:10:59.668630    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:10:59.668637    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:10:59.668643    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:10:59.668650    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:10:59.668656    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:10:59.668666    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:10:59.668673    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:10:59.668681    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:10:59.668689    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:10:59.668712    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:10:59.668718    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:10:59.668725    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:10:59.668733    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:10:59.668742    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:11:01.670846    6489 client.go:171] duration metric: took 1m1.2811081s to LocalClient.Create
	I0806 01:11:03.672352    6489 start.go:128] duration metric: took 1m3.313841015s to createHost
	I0806 01:11:03.672421    6489 start.go:83] releasing machines lock for "offline-docker-733000", held for 1m3.313960251s
	W0806 01:11:03.672442    6489 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 46:cd:df:ac:b:5a
	I0806 01:11:03.672763    6489 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 01:11:03.672799    6489 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 01:11:03.682059    6489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53702
	I0806 01:11:03.682470    6489 main.go:141] libmachine: () Calling .GetVersion
	I0806 01:11:03.682929    6489 main.go:141] libmachine: Using API Version  1
	I0806 01:11:03.682944    6489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 01:11:03.683196    6489 main.go:141] libmachine: () Calling .GetMachineName
	I0806 01:11:03.683651    6489 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 01:11:03.683697    6489 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 01:11:03.692486    6489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53704
	I0806 01:11:03.692953    6489 main.go:141] libmachine: () Calling .GetVersion
	I0806 01:11:03.693417    6489 main.go:141] libmachine: Using API Version  1
	I0806 01:11:03.693433    6489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 01:11:03.693818    6489 main.go:141] libmachine: () Calling .GetMachineName
	I0806 01:11:03.693999    6489 main.go:141] libmachine: (offline-docker-733000) Calling .GetState
	I0806 01:11:03.694107    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:03.694178    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6535
	I0806 01:11:03.695186    6489 main.go:141] libmachine: (offline-docker-733000) Calling .DriverName
	I0806 01:11:03.756687    6489 out.go:177] * Deleting "offline-docker-733000" in hyperkit ...
	I0806 01:11:03.777655    6489 main.go:141] libmachine: (offline-docker-733000) Calling .Remove
	I0806 01:11:03.777789    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:03.777799    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:03.777866    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6535
	I0806 01:11:03.778801    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:03.778869    6489 main.go:141] libmachine: (offline-docker-733000) DBG | waiting for graceful shutdown
	I0806 01:11:04.779316    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:04.779392    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6535
	I0806 01:11:04.780306    6489 main.go:141] libmachine: (offline-docker-733000) DBG | waiting for graceful shutdown
	I0806 01:11:05.780867    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:05.781021    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6535
	I0806 01:11:05.782731    6489 main.go:141] libmachine: (offline-docker-733000) DBG | waiting for graceful shutdown
	I0806 01:11:06.784199    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:06.784321    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6535
	I0806 01:11:06.785209    6489 main.go:141] libmachine: (offline-docker-733000) DBG | waiting for graceful shutdown
	I0806 01:11:07.785964    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:07.786032    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6535
	I0806 01:11:07.786642    6489 main.go:141] libmachine: (offline-docker-733000) DBG | waiting for graceful shutdown
	I0806 01:11:08.788562    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:08.788629    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6535
	I0806 01:11:08.789679    6489 main.go:141] libmachine: (offline-docker-733000) DBG | sending sigkill
	I0806 01:11:08.789689    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	W0806 01:11:08.803016    6489 out.go:239] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 46:cd:df:ac:b:5a
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 46:cd:df:ac:b:5a
	I0806 01:11:08.803036    6489 start.go:729] Will try again in 5 seconds ...
	I0806 01:11:08.812466    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:11:08 WARN : hyperkit: failed to read stderr: EOF
	I0806 01:11:08.812496    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:11:08 WARN : hyperkit: failed to read stdout: EOF
	I0806 01:11:13.804772    6489 start.go:360] acquireMachinesLock for offline-docker-733000: {Name:mk23fe223591838ba69a1052c4474834b6e8897d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:12:06.517156    6489 start.go:364] duration metric: took 52.711438668s to acquireMachinesLock for "offline-docker-733000"
	I0806 01:12:06.517231    6489 start.go:93] Provisioning new machine with config: &{Name:offline-docker-733000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-733000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:12:06.517291    6489 start.go:125] createHost starting for "" (driver="hyperkit")
	I0806 01:12:06.538852    6489 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0806 01:12:06.538920    6489 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 01:12:06.538945    6489 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 01:12:06.547434    6489 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53713
	I0806 01:12:06.547820    6489 main.go:141] libmachine: () Calling .GetVersion
	I0806 01:12:06.548192    6489 main.go:141] libmachine: Using API Version  1
	I0806 01:12:06.548214    6489 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 01:12:06.548402    6489 main.go:141] libmachine: () Calling .GetMachineName
	I0806 01:12:06.548518    6489 main.go:141] libmachine: (offline-docker-733000) Calling .GetMachineName
	I0806 01:12:06.548612    6489 main.go:141] libmachine: (offline-docker-733000) Calling .DriverName
	I0806 01:12:06.548732    6489 start.go:159] libmachine.API.Create for "offline-docker-733000" (driver="hyperkit")
	I0806 01:12:06.548749    6489 client.go:168] LocalClient.Create starting
	I0806 01:12:06.548776    6489 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem
	I0806 01:12:06.548831    6489 main.go:141] libmachine: Decoding PEM data...
	I0806 01:12:06.548840    6489 main.go:141] libmachine: Parsing certificate...
	I0806 01:12:06.548882    6489 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem
	I0806 01:12:06.548927    6489 main.go:141] libmachine: Decoding PEM data...
	I0806 01:12:06.548937    6489 main.go:141] libmachine: Parsing certificate...
	I0806 01:12:06.548950    6489 main.go:141] libmachine: Running pre-create checks...
	I0806 01:12:06.548955    6489 main.go:141] libmachine: (offline-docker-733000) Calling .PreCreateCheck
	I0806 01:12:06.549054    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:06.549099    6489 main.go:141] libmachine: (offline-docker-733000) Calling .GetConfigRaw
	I0806 01:12:06.601635    6489 main.go:141] libmachine: Creating machine...
	I0806 01:12:06.601645    6489 main.go:141] libmachine: (offline-docker-733000) Calling .Create
	I0806 01:12:06.601733    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:06.601865    6489 main.go:141] libmachine: (offline-docker-733000) DBG | I0806 01:12:06.601729    6693 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 01:12:06.601913    6489 main.go:141] libmachine: (offline-docker-733000) Downloading /Users/jenkins/minikube-integration/19370-944/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-944/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 01:12:06.807398    6489 main.go:141] libmachine: (offline-docker-733000) DBG | I0806 01:12:06.807322    6693 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/id_rsa...
	I0806 01:12:06.902867    6489 main.go:141] libmachine: (offline-docker-733000) DBG | I0806 01:12:06.902800    6693 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/offline-docker-733000.rawdisk...
	I0806 01:12:06.902877    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Writing magic tar header
	I0806 01:12:06.902888    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Writing SSH key tar header
	I0806 01:12:06.903450    6489 main.go:141] libmachine: (offline-docker-733000) DBG | I0806 01:12:06.903408    6693 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000 ...
	I0806 01:12:07.273999    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:07.274017    6489 main.go:141] libmachine: (offline-docker-733000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/hyperkit.pid
	I0806 01:12:07.274064    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Using UUID 366ac05e-8fe0-495c-be98-20a07e1c8f4c
	I0806 01:12:07.300432    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Generated MAC d6:bd:48:fb:f0:ef
	I0806 01:12:07.300453    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-733000
	I0806 01:12:07.300499    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:12:07 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"366ac05e-8fe0-495c-be98-20a07e1c8f4c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 01:12:07.300537    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:12:07 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"366ac05e-8fe0-495c-be98-20a07e1c8f4c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 01:12:07.300581    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:12:07 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "366ac05e-8fe0-495c-be98-20a07e1c8f4c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/offline-docker-733000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-733000"}
	I0806 01:12:07.300634    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:12:07 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 366ac05e-8fe0-495c-be98-20a07e1c8f4c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/offline-docker-733000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/console-ring -f kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-733000"
	I0806 01:12:07.300650    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:12:07 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0806 01:12:07.303590    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:12:07 DEBUG: hyperkit: Pid is 6694
	I0806 01:12:07.304070    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 0
	I0806 01:12:07.304085    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:07.304172    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6694
	I0806 01:12:07.305296    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for d6:bd:48:fb:f0:ef in /var/db/dhcpd_leases ...
	I0806 01:12:07.305382    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:12:07.305398    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:12:07.305418    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:12:07.305429    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:12:07.305474    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:12:07.305514    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:12:07.305531    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:12:07.305545    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:12:07.305563    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:12:07.305580    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:12:07.305595    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:12:07.305610    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:12:07.305654    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:12:07.305690    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:12:07.305714    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:12:07.305727    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:12:07.305738    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:12:07.305753    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:12:07.311286    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:12:07 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0806 01:12:07.319463    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:12:07 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/offline-docker-733000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0806 01:12:07.320318    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:12:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 01:12:07.320333    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:12:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 01:12:07.320346    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:12:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 01:12:07.320357    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:12:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 01:12:07.695861    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:12:07 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0806 01:12:07.695897    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:12:07 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0806 01:12:07.810428    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:12:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 01:12:07.810449    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:12:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 01:12:07.810463    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:12:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 01:12:07.810475    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:12:07 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 01:12:07.811340    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:12:07 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0806 01:12:07.811354    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:12:07 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0806 01:12:09.306517    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 1
	I0806 01:12:09.306543    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:09.306580    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6694
	I0806 01:12:09.307437    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for d6:bd:48:fb:f0:ef in /var/db/dhcpd_leases ...
	I0806 01:12:09.307499    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:12:09.307515    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:12:09.307524    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:12:09.307535    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:12:09.307543    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:12:09.307551    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:12:09.307561    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:12:09.307568    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:12:09.307575    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:12:09.307587    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:12:09.307601    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:12:09.307609    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:12:09.307618    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:12:09.307627    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:12:09.307636    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:12:09.307643    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:12:09.307651    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:12:09.307673    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:12:11.308260    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 2
	I0806 01:12:11.308278    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:11.308350    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6694
	I0806 01:12:11.309146    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for d6:bd:48:fb:f0:ef in /var/db/dhcpd_leases ...
	I0806 01:12:11.309206    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:12:11.309225    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:12:11.309235    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:12:11.309244    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:12:11.309253    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:12:11.309263    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:12:11.309272    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:12:11.309278    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:12:11.309284    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:12:11.309294    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:12:11.309305    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:12:11.309312    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:12:11.309320    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:12:11.309336    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:12:11.309344    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:12:11.309351    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:12:11.309357    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:12:11.309379    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:12:13.256486    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:12:13 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0806 01:12:13.256601    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:12:13 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0806 01:12:13.256610    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:12:13 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0806 01:12:13.276958    6489 main.go:141] libmachine: (offline-docker-733000) DBG | 2024/08/06 01:12:13 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0806 01:12:13.309901    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 3
	I0806 01:12:13.309924    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:13.310121    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6694
	I0806 01:12:13.311572    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for d6:bd:48:fb:f0:ef in /var/db/dhcpd_leases ...
	I0806 01:12:13.311690    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:12:13.311719    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:12:13.311736    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:12:13.311748    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:12:13.311761    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:12:13.311771    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:12:13.311812    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:12:13.311831    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:12:13.311842    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:12:13.311857    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:12:13.311870    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:12:13.311882    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:12:13.311903    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:12:13.311922    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:12:13.311936    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:12:13.311946    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:12:13.311955    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:12:13.311967    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:12:15.312928    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 4
	I0806 01:12:15.312946    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:15.312994    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6694
	I0806 01:12:15.313832    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for d6:bd:48:fb:f0:ef in /var/db/dhcpd_leases ...
	I0806 01:12:15.313870    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:12:15.313879    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:12:15.313892    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:12:15.313899    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:12:15.313911    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:12:15.313918    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:12:15.313924    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:12:15.313931    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:12:15.313948    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:12:15.313961    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:12:15.313975    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:12:15.313984    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:12:15.313991    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:12:15.314002    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:12:15.314011    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:12:15.314020    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:12:15.314032    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:12:15.314042    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:12:17.315763    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 5
	I0806 01:12:17.315778    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:17.315788    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6694
	I0806 01:12:17.316638    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for d6:bd:48:fb:f0:ef in /var/db/dhcpd_leases ...
	I0806 01:12:17.316694    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:12:17.316706    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:12:17.316715    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:12:17.316721    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:12:17.316736    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:12:17.316750    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:12:17.316767    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:12:17.316778    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:12:17.316786    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:12:17.316792    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:12:17.316804    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:12:17.316816    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:12:17.316825    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:12:17.316834    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:12:17.316847    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:12:17.316860    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:12:17.316878    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:12:17.316890    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:12:19.318929    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 6
	I0806 01:12:19.318945    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:19.319015    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6694
	I0806 01:12:19.320013    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for d6:bd:48:fb:f0:ef in /var/db/dhcpd_leases ...
	I0806 01:12:19.320052    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:12:19.320063    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:12:19.320081    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:12:19.320088    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:12:19.320096    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:12:19.320115    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:12:19.320129    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:12:19.320137    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:12:19.320146    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:12:19.320157    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:12:19.320167    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:12:19.320176    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:12:19.320183    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:12:19.320192    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:12:19.320199    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:12:19.320209    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:12:19.320215    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:12:19.320223    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:12:21.320436    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 7
	I0806 01:12:21.320448    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:21.320594    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6694
	I0806 01:12:21.321359    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for d6:bd:48:fb:f0:ef in /var/db/dhcpd_leases ...
	I0806 01:12:21.321406    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:12:21.321419    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:12:21.321432    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:12:21.321442    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:12:21.321455    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:12:21.321464    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:12:21.321470    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:12:21.321477    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:12:21.321485    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:12:21.321492    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:12:21.321498    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:12:21.321504    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:12:21.321526    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:12:21.321538    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:12:21.321546    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:12:21.321554    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:12:21.321561    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:12:21.321567    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:12:23.322822    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 8
	I0806 01:12:23.322837    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:23.322980    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6694
	I0806 01:12:23.323750    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for d6:bd:48:fb:f0:ef in /var/db/dhcpd_leases ...
	I0806 01:12:23.323791    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:12:23.323803    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:12:23.323814    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:12:23.323820    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:12:23.323826    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:12:23.323832    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:12:23.323838    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:12:23.323855    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:12:23.323867    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:12:23.323875    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:12:23.323883    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:12:23.323893    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:12:23.323902    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:12:23.323920    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:12:23.323939    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:12:23.323947    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:12:23.323955    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:12:23.323964    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:12:25.325407    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 9
	I0806 01:12:25.325421    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:25.325481    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6694
	I0806 01:12:25.326302    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for d6:bd:48:fb:f0:ef in /var/db/dhcpd_leases ...
	I0806 01:12:25.326323    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:12:25.326336    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:12:25.326346    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:12:25.326353    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:12:25.326368    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:12:25.326376    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:12:25.326383    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:12:25.326391    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:12:25.326399    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:12:25.326404    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:12:25.326412    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:12:25.326420    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:12:25.326429    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:12:25.326437    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:12:25.326447    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:12:25.326455    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:12:25.326461    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:12:25.326471    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:12:27.327962    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 10
	I0806 01:12:27.327989    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:27.328072    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6694
	I0806 01:12:27.329065    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for d6:bd:48:fb:f0:ef in /var/db/dhcpd_leases ...
	I0806 01:12:27.329108    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:12:27.329117    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:12:27.329126    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:12:27.329132    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:12:27.329139    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:12:27.329144    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:12:27.329150    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:12:27.329159    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:12:27.329166    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:12:27.329174    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:12:27.329184    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:12:27.329192    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:12:27.329201    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:12:27.329216    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:12:27.329223    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:12:27.329230    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:12:27.329246    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:12:27.329259    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:12:29.331351    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 11
	I0806 01:12:29.331366    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:29.331481    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6694
	I0806 01:12:29.332298    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for d6:bd:48:fb:f0:ef in /var/db/dhcpd_leases ...
	I0806 01:12:29.332335    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:12:29.332343    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:12:29.332355    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:12:29.332362    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:12:29.332369    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:12:29.332376    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:12:29.332396    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:12:29.332406    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:12:29.332416    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:12:29.332440    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:12:29.332450    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:12:29.332464    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:12:29.332474    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:12:29.332481    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:12:29.332488    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:12:29.332496    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:12:29.332503    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:12:29.332512    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
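	The attempts above repeat every two seconds: the driver re-reads /var/db/dhcpd_leases and scans each entry for the VM's MAC address (d6:bd:48:fb:f0:ef), which never appears among the 17 leases. A minimal sketch of such a lease-file scan — not the driver's actual code; helper names are illustrative, and the field layout assumes the standard macOS dhcpd_leases format:

	```go
	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// findMAC scans dhcpd_leases-style text for a hardware address and
	// returns the matching IP address, or "" if the MAC is not present yet.
	func findMAC(leases, mac string) string {
		ipRe := regexp.MustCompile(`ip_address=(\S+)`)
		// Each lease is a brace-delimited block; split on the closing brace.
		for _, block := range strings.Split(leases, "}") {
			if strings.Contains(block, "hw_address=1,"+mac) {
				if m := ipRe.FindStringSubmatch(block); m != nil {
					return m[1]
				}
			}
		}
		return ""
	}

	func main() {
		sample := `{
		name=minikube
		ip_address=192.169.0.18
		hw_address=1,c2:6a:9f:16:92:98
		lease=0x66b32b74
	}`
		fmt.Println(findMAC(sample, "c2:6a:9f:16:92:98")) // → 192.169.0.18
		fmt.Println(findMAC(sample, "d6:bd:48:fb:f0:ef")) // → "" (MAC not leased)
	}
	```

	A caller would wrap findMAC in a retry loop with a sleep between reads, which is exactly the "Attempt N" cadence visible in the log; the test fails once the retry budget is exhausted without the MAC ever being leased.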
	I0806 01:12:31.334606    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 12
	I0806 01:12:31.334621    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:31.334736    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6694
	I0806 01:12:31.335560    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for d6:bd:48:fb:f0:ef in /var/db/dhcpd_leases ...
	I0806 01:12:31.335603    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:12:31.335613    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:12:31.335633    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:12:31.335642    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:12:31.335651    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:12:31.335659    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:12:31.335667    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:12:31.335674    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:12:31.335680    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:12:31.335692    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:12:31.335705    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:12:31.335721    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:12:31.335734    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:12:31.335750    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:12:31.335759    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:12:31.335767    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:12:31.335773    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:12:31.335785    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:12:33.335991    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 13
	I0806 01:12:33.336007    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:33.336052    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6694
	I0806 01:12:33.336837    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for d6:bd:48:fb:f0:ef in /var/db/dhcpd_leases ...
	I0806 01:12:33.336886    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:12:33.336896    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:12:33.336915    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:12:33.336925    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:12:33.336933    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:12:33.336939    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:12:33.336946    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:12:33.336966    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:12:33.336976    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:12:33.336983    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:12:33.336990    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:12:33.336996    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:12:33.337003    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:12:33.337011    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:12:33.337018    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:12:33.337028    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:12:33.337035    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:12:33.337042    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:12:35.339143    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 14
	I0806 01:12:35.339158    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:35.339217    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6694
	I0806 01:12:35.340120    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for d6:bd:48:fb:f0:ef in /var/db/dhcpd_leases ...
	I0806 01:12:35.340159    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:12:35.340171    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:12:35.340180    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:12:35.340186    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:12:35.340193    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:12:35.340199    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:12:35.340207    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:12:35.340216    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:12:35.340233    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:12:35.340242    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:12:35.340249    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:12:35.340257    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:12:35.340270    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:12:35.340280    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:12:35.340290    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:12:35.340301    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:12:35.340307    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:12:35.340315    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:12:37.342112    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 15
	I0806 01:12:37.342128    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:37.342207    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6694
	I0806 01:12:37.343031    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for d6:bd:48:fb:f0:ef in /var/db/dhcpd_leases ...
	I0806 01:12:37.343093    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:12:37.343106    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:12:37.343113    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:12:37.343120    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:12:37.343152    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:12:37.343166    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:12:37.343175    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:12:37.343183    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:12:37.343190    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:12:37.343196    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:12:37.343211    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:12:37.343224    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:12:37.343232    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:12:37.343241    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:12:37.343248    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:12:37.343257    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:12:37.343275    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:12:37.343283    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:12:39.344391    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 16
	I0806 01:12:39.344406    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:39.344546    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6694
	I0806 01:12:39.345509    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for d6:bd:48:fb:f0:ef in /var/db/dhcpd_leases ...
	I0806 01:12:39.345551    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:12:39.345563    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:12:39.345587    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:12:39.345598    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:12:39.345619    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:12:39.345630    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:12:39.345638    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:12:39.345646    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:12:39.345653    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:12:39.345667    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:12:39.345675    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:12:39.345688    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:12:39.345702    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:12:39.345722    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:12:39.345736    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:12:39.345750    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:12:39.345759    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:12:39.345776    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:12:41.345981    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 17
	I0806 01:12:41.345996    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:41.346118    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6694
	I0806 01:12:41.346884    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for d6:bd:48:fb:f0:ef in /var/db/dhcpd_leases ...
	I0806 01:12:41.346930    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:12:41.346940    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:12:41.346959    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:12:41.346968    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:12:41.346976    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:12:41.346982    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:12:41.346988    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:12:41.346995    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:12:41.347001    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:12:41.347009    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:12:41.347016    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:12:41.347024    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:12:41.347041    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:12:41.347054    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:12:41.347062    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:12:41.347071    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:12:41.347078    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:12:41.347087    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:12:43.348454    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 18
	I0806 01:12:43.348471    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:43.348569    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6694
	I0806 01:12:43.349361    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for d6:bd:48:fb:f0:ef in /var/db/dhcpd_leases ...
	I0806 01:12:43.349406    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:12:43.349416    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:12:43.349427    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:12:43.349436    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:12:43.349449    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:12:43.349462    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:12:43.349469    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:12:43.349478    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:12:43.349485    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:12:43.349492    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:12:43.349504    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:12:43.349516    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:12:43.349525    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:12:43.349533    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:12:43.349541    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:12:43.349549    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:12:43.349561    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:12:43.349569    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:12:45.350030    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 19
	I0806 01:12:45.350044    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:45.350140    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6694
	I0806 01:12:45.350912    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for d6:bd:48:fb:f0:ef in /var/db/dhcpd_leases ...
	I0806 01:12:45.350955    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:12:45.350971    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:12:45.350985    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:12:45.350997    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:12:45.351007    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:12:45.351016    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:12:45.351023    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:12:45.351031    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:12:45.351046    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:12:45.351059    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:12:45.351068    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:12:45.351076    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:12:45.351092    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:12:45.351100    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:12:45.351107    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:12:45.351115    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:12:45.351122    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:12:45.351130    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:12:47.353152    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 20
	I0806 01:12:47.353167    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:47.353257    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6694
	I0806 01:12:47.354078    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for d6:bd:48:fb:f0:ef in /var/db/dhcpd_leases ...
	I0806 01:12:47.354114    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:12:47.354122    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:12:47.354134    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:12:47.354144    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:12:47.354151    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:12:47.354161    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:12:47.354176    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:12:47.354187    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:12:47.354196    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:12:47.354213    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:12:47.354232    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:12:47.354243    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:12:47.354251    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:12:47.354259    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:12:47.354271    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:12:47.354280    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:12:47.354295    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:12:47.354309    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:12:49.354373    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 21
	I0806 01:12:49.354388    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:49.354499    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6694
	I0806 01:12:49.355268    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for d6:bd:48:fb:f0:ef in /var/db/dhcpd_leases ...
	I0806 01:12:49.355307    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:12:49.355319    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:12:49.355336    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:12:49.355351    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:12:49.355364    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:12:49.355372    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:12:49.355389    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:12:49.355407    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:12:49.355419    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:12:49.355429    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:12:49.355437    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:12:49.355444    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:12:49.355450    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:12:49.355459    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:12:49.355466    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:12:49.355473    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:12:49.355487    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:12:49.355496    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:12:51.357539    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 22
	I0806 01:12:51.357552    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:51.357653    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6694
	I0806 01:12:51.358505    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for d6:bd:48:fb:f0:ef in /var/db/dhcpd_leases ...
	I0806 01:12:51.358551    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:12:51.358564    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:12:51.358578    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:12:51.358587    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:12:51.358597    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:12:51.358621    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:12:51.358631    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:12:51.358641    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:12:51.358654    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:12:51.358664    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:12:51.358678    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:12:51.358687    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:12:51.358694    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:12:51.358702    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:12:51.358716    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:12:51.358726    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:12:51.358733    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:12:51.358739    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:12:53.360125    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 23
	I0806 01:12:53.360150    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:53.360258    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6694
	I0806 01:12:53.361025    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for d6:bd:48:fb:f0:ef in /var/db/dhcpd_leases ...
	I0806 01:12:53.361102    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:12:53.361113    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:12:53.361141    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:12:53.361149    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:12:53.361155    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:12:53.361161    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:12:53.361166    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:12:53.361178    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:12:53.361192    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:12:53.361199    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:12:53.361205    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:12:53.361211    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:12:53.361217    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:12:53.361224    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:12:53.361234    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:12:53.361242    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:12:53.361248    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:12:53.361254    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:12:55.361491    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 24
	I0806 01:12:55.361504    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:55.361561    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6694
	I0806 01:12:55.362446    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for d6:bd:48:fb:f0:ef in /var/db/dhcpd_leases ...
	I0806 01:12:55.362467    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:12:55.362480    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:12:55.362498    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:12:55.362524    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:12:55.362540    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:12:55.362551    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:12:55.362569    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:12:55.362578    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:12:55.362589    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:12:55.362595    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:12:55.362603    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:12:55.362612    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:12:55.362619    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:12:55.362627    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:12:55.362643    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:12:55.362655    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:12:55.362663    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:12:55.362672    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:12:57.364008    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 25
	I0806 01:12:57.364024    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:57.364062    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6694
	I0806 01:12:57.365066    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for d6:bd:48:fb:f0:ef in /var/db/dhcpd_leases ...
	I0806 01:12:57.365130    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:12:57.365142    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:12:57.365159    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:12:57.365173    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:12:57.365182    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:12:57.365195    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:12:57.365202    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:12:57.365211    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:12:57.365224    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:12:57.365232    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:12:57.365239    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:12:57.365246    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:12:57.365253    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:12:57.365261    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:12:57.365270    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:12:57.365278    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:12:57.365285    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:12:57.365292    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:12:59.367390    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 26
	I0806 01:12:59.367402    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:59.367451    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6694
	I0806 01:12:59.368307    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for d6:bd:48:fb:f0:ef in /var/db/dhcpd_leases ...
	I0806 01:12:59.368344    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:12:59.368355    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:12:59.368363    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:12:59.368369    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:12:59.368377    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:12:59.368385    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:12:59.368394    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:12:59.368400    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:12:59.368407    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:12:59.368414    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:12:59.368420    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:12:59.368426    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:12:59.368438    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:12:59.368450    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:12:59.368463    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:12:59.368472    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:12:59.368479    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:12:59.368488    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:13:01.369469    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 27
	I0806 01:13:01.369485    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:13:01.369619    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6694
	I0806 01:13:01.370392    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for d6:bd:48:fb:f0:ef in /var/db/dhcpd_leases ...
	I0806 01:13:01.370455    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:13:01.370465    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:13:01.370480    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:13:01.370489    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:13:01.370496    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:13:01.370501    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:13:01.370519    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:13:01.370531    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:13:01.370540    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:13:01.370547    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:13:01.370561    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:13:01.370571    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:13:01.370578    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:13:01.370585    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:13:01.370592    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:13:01.370598    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:13:01.370606    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:13:01.370614    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:13:03.371851    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 28
	I0806 01:13:03.371863    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:13:03.371917    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6694
	I0806 01:13:03.372710    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for d6:bd:48:fb:f0:ef in /var/db/dhcpd_leases ...
	I0806 01:13:03.372766    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:13:03.372781    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:13:03.372790    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:13:03.372797    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:13:03.372804    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:13:03.372811    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:13:03.372818    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:13:03.372826    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:13:03.372833    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:13:03.372839    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:13:03.372852    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:13:03.372865    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:13:03.372873    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:13:03.372881    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:13:03.372895    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:13:03.372903    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:13:03.372914    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:13:03.372922    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:13:05.374979    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Attempt 29
	I0806 01:13:05.375002    6489 main.go:141] libmachine: (offline-docker-733000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:13:05.375089    6489 main.go:141] libmachine: (offline-docker-733000) DBG | hyperkit pid from json: 6694
	I0806 01:13:05.376010    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Searching for d6:bd:48:fb:f0:ef in /var/db/dhcpd_leases ...
	I0806 01:13:05.376059    6489 main.go:141] libmachine: (offline-docker-733000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:13:05.376080    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:13:05.376100    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:13:05.376111    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:13:05.376120    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:13:05.376126    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:13:05.376134    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:13:05.376149    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:13:05.376159    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:13:05.376168    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:13:05.376175    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:13:05.376182    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:13:05.376200    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:13:05.376212    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:13:05.376222    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:13:05.376231    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:13:05.376238    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:13:05.376244    6489 main.go:141] libmachine: (offline-docker-733000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:13:07.376915    6489 client.go:171] duration metric: took 1m0.827097058s to LocalClient.Create
	I0806 01:13:09.379042    6489 start.go:128] duration metric: took 1m2.860644449s to createHost
	I0806 01:13:09.379082    6489 start.go:83] releasing machines lock for "offline-docker-733000", held for 1m2.860814438s
	W0806 01:13:09.379202    6489 out.go:239] * Failed to start hyperkit VM. Running "minikube delete -p offline-docker-733000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for d6:bd:48:fb:f0:ef
	* Failed to start hyperkit VM. Running "minikube delete -p offline-docker-733000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for d6:bd:48:fb:f0:ef
	I0806 01:13:09.442331    6489 out.go:177] 
	W0806 01:13:09.463497    6489 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for d6:bd:48:fb:f0:ef
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for d6:bd:48:fb:f0:ef
	W0806 01:13:09.463511    6489 out.go:239] * 
	* 
	W0806 01:13:09.464190    6489 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 01:13:09.526428    6489 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-733000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-08-06 01:13:09.612779 -0700 PDT m=+4158.781942044
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-733000 -n offline-docker-733000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-733000 -n offline-docker-733000: exit status 7 (81.505943ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0806 01:13:09.692284    6717 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0806 01:13:09.692307    6717 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-733000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "offline-docker-733000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-733000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-733000: (5.250928566s)
--- FAIL: TestOffline (195.15s)
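The failure above comes from the hyperkit driver polling `/var/db/dhcpd_leases` for the new VM's MAC address (`d6:bd:48:fb:f0:ef`), which never appeared among the 17 existing lease records. Each "dhcp entry" line in the debug log is one lease record; a minimal sketch of the MAC→IP lookup over that record format (the sample `entry` string and the variable names are illustrative, not minikube code):

```shell
# One lease record in the format the driver logs (sample data, not a live lease)
entry='{Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}'

# Extract the IPAddress field from the record whose HWAddress matches
mac='86:6d:d0:27:68:33'
ip=$(printf '%s\n' "$entry" | sed -n "s/.*IPAddress:\([0-9.]*\) HWAddress:$mac .*/\1/p")
echo "$ip"    # 192.169.0.11
```

When the driver finds no record for the target MAC before its retry budget runs out (Attempt 29 above), it surfaces the "IP address never found in dhcp leases file" error seen in stderr.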

TestCertOptions (251.66s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-351000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
E0806 01:19:43.548432    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/skaffold-699000/client.crt: no such file or directory
E0806 01:19:45.494763    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
E0806 01:20:11.243686    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/skaffold-699000/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-options-351000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : exit status 80 (4m5.996649655s)

-- stdout --
	* [cert-options-351000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "cert-options-351000" primary control-plane node in "cert-options-351000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "cert-options-351000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 6a:c4:a:bd:bc:a9
	* Failed to start hyperkit VM. Running "minikube delete -p cert-options-351000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ce:46:cd:69:39:47
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ce:46:cd:69:39:47
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-options-351000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-351000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p cert-options-351000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 50 (160.520607ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-351000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-amd64 -p cert-options-351000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 50
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-351000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-351000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p cert-options-351000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 50 (159.882244ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-351000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-amd64 ssh -p cert-options-351000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 50
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-351000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-08-06 01:22:36.680399 -0700 PDT m=+4725.839677616
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-351000 -n cert-options-351000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-351000 -n cert-options-351000: exit status 7 (77.426571ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0806 01:22:36.756137    6975 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0806 01:22:36.756158    6975 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-351000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "cert-options-351000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-351000
E0806 01:22:41.506037    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-351000: (5.230466837s)
--- FAIL: TestCertOptions (251.66s)
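TestCertOptions never got far enough to run its SAN assertion, but the check it would have performed against `apiserver.crt` can be reproduced locally. A self-contained sketch (the throwaway cert and `/tmp` paths are illustrative; the real test inspects `/var/lib/minikube/certs/apiserver.crt` over SSH, and `-addext` needs OpenSSL 1.1.1+):

```shell
# Issue a throwaway self-signed cert carrying the SANs the test requests via
# --apiserver-ips / --apiserver-names
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/apiserver.key -out /tmp/apiserver.crt \
  -subj "/CN=minikube" \
  -addext "subjectAltName=IP:127.0.0.1,IP:192.168.15.15,DNS:localhost,DNS:www.google.com"

# Roughly the check the test performs: dump the cert and inspect the SAN list
openssl x509 -text -noout -in /tmp/apiserver.crt | grep -A1 "Subject Alternative Name"
```

The four "does not include ... in SAN" messages above are this assertion failing for each requested name, since there was no cert to read at all.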

TestCertExpiration (1705.68s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-490000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
E0806 01:17:27.399479    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/skaffold-699000/client.crt: no such file or directory
E0806 01:17:41.499120    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
E0806 01:18:22.438756    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-490000 --memory=2048 --cert-expiration=3m --driver=hyperkit : exit status 80 (4m6.380877455s)

-- stdout --
	* [cert-expiration-490000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "cert-expiration-490000" primary control-plane node in "cert-expiration-490000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "cert-expiration-490000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 62:4d:3d:d5:2f:2e
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-490000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for c2:bf:88:0:ff:c2
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for c2:bf:88:0:ff:c2
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-expiration-490000 --memory=2048 --cert-expiration=3m --driver=hyperkit " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-490000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
E0806 01:24:43.592262    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/skaffold-699000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-490000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : exit status 80 (21m13.956727269s)

-- stdout --
	* [cert-expiration-490000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "cert-expiration-490000" primary control-plane node in "cert-expiration-490000" cluster
	* Updating the running hyperkit "cert-expiration-490000" VM ...
	* Updating the running hyperkit "cert-expiration-490000" VM ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-490000" may fix it: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-amd64 start -p cert-expiration-490000 --memory=2048 --cert-expiration=8760h --driver=hyperkit " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-490000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "cert-expiration-490000" primary control-plane node in "cert-expiration-490000" cluster
	* Updating the running hyperkit "cert-expiration-490000" VM ...
	* Updating the running hyperkit "cert-expiration-490000" VM ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-490000" may fix it: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-08-06 01:45:47.382493 -0700 PDT m=+6116.535294108
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-490000 -n cert-expiration-490000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-490000 -n cert-expiration-490000: exit status 7 (78.883017ms)

-- stdout --
	Error

                                                
** stderr ** 
	E0806 01:45:47.459518    8488 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0806 01:45:47.459539    8488 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-490000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "cert-expiration-490000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-490000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-490000: (5.26352724s)
--- FAIL: TestCertExpiration (1705.68s)

TestDockerFlags (252.06s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-346000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
E0806 01:14:43.542782    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/skaffold-699000/client.crt: no such file or directory
E0806 01:14:43.549200    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/skaffold-699000/client.crt: no such file or directory
E0806 01:14:43.560391    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/skaffold-699000/client.crt: no such file or directory
E0806 01:14:43.582483    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/skaffold-699000/client.crt: no such file or directory
E0806 01:14:43.624538    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/skaffold-699000/client.crt: no such file or directory
E0806 01:14:43.706615    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/skaffold-699000/client.crt: no such file or directory
E0806 01:14:43.866723    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/skaffold-699000/client.crt: no such file or directory
E0806 01:14:44.187121    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/skaffold-699000/client.crt: no such file or directory
E0806 01:14:44.828267    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/skaffold-699000/client.crt: no such file or directory
E0806 01:14:46.110367    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/skaffold-699000/client.crt: no such file or directory
E0806 01:14:48.671286    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/skaffold-699000/client.crt: no such file or directory
E0806 01:14:53.792707    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/skaffold-699000/client.crt: no such file or directory
E0806 01:15:04.033171    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/skaffold-699000/client.crt: no such file or directory
E0806 01:15:24.514554    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/skaffold-699000/client.crt: no such file or directory
E0806 01:16:05.477436    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/skaffold-699000/client.crt: no such file or directory
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-346000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (4m6.289128314s)

-- stdout --
	* [docker-flags-346000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "docker-flags-346000" primary control-plane node in "docker-flags-346000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "docker-flags-346000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0806 01:14:18.313479    6760 out.go:291] Setting OutFile to fd 1 ...
	I0806 01:14:18.313749    6760 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:14:18.313755    6760 out.go:304] Setting ErrFile to fd 2...
	I0806 01:14:18.313758    6760 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:14:18.313917    6760 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	I0806 01:14:18.315465    6760 out.go:298] Setting JSON to false
	I0806 01:14:18.338146    6760 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":4420,"bootTime":1722927638,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0806 01:14:18.338240    6760 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 01:14:18.359080    6760 out.go:177] * [docker-flags-346000] minikube v1.33.1 on Darwin 14.5
	I0806 01:14:18.401950    6760 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 01:14:18.401971    6760 notify.go:220] Checking for updates...
	I0806 01:14:18.443496    6760 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 01:14:18.463839    6760 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0806 01:14:18.486826    6760 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 01:14:18.507572    6760 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 01:14:18.527782    6760 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 01:14:18.549292    6760 config.go:182] Loaded profile config "force-systemd-flag-672000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 01:14:18.549384    6760 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 01:14:18.577811    6760 out.go:177] * Using the hyperkit driver based on user configuration
	I0806 01:14:18.618801    6760 start.go:297] selected driver: hyperkit
	I0806 01:14:18.618813    6760 start.go:901] validating driver "hyperkit" against <nil>
	I0806 01:14:18.618824    6760 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 01:14:18.621943    6760 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:14:18.622069    6760 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19370-944/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0806 01:14:18.630451    6760 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0806 01:14:18.634259    6760 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 01:14:18.634281    6760 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0806 01:14:18.634320    6760 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 01:14:18.634507    6760 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0806 01:14:18.634533    6760 cni.go:84] Creating CNI manager for ""
	I0806 01:14:18.634551    6760 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 01:14:18.634557    6760 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0806 01:14:18.634628    6760 start.go:340] cluster config:
	{Name:docker-flags-346000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-346000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:
[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientP
ath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 01:14:18.634723    6760 iso.go:125] acquiring lock: {Name:mka9ceffb203a07dd8928fb34e5b66df1a4204ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:14:18.655638    6760 out.go:177] * Starting "docker-flags-346000" primary control-plane node in "docker-flags-346000" cluster
	I0806 01:14:18.676786    6760 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 01:14:18.676828    6760 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0806 01:14:18.676840    6760 cache.go:56] Caching tarball of preloaded images
	I0806 01:14:18.676947    6760 preload.go:172] Found /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0806 01:14:18.676956    6760 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 01:14:18.677065    6760 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/docker-flags-346000/config.json ...
	I0806 01:14:18.677081    6760 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/docker-flags-346000/config.json: {Name:mkd88fcd25ff41ccf4c045010410c585afd3fde2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 01:14:18.677427    6760 start.go:360] acquireMachinesLock for docker-flags-346000: {Name:mk23fe223591838ba69a1052c4474834b6e8897d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:15:15.619232    6760 start.go:364] duration metric: took 56.940798314s to acquireMachinesLock for "docker-flags-346000"
	I0806 01:15:15.619271    6760 start.go:93] Provisioning new machine with config: &{Name:docker-flags-346000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSH
Key: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-346000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:15:15.619329    6760 start.go:125] createHost starting for "" (driver="hyperkit")
	I0806 01:15:15.640794    6760 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0806 01:15:15.640922    6760 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 01:15:15.640955    6760 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 01:15:15.649462    6760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53747
	I0806 01:15:15.649821    6760 main.go:141] libmachine: () Calling .GetVersion
	I0806 01:15:15.650253    6760 main.go:141] libmachine: Using API Version  1
	I0806 01:15:15.650264    6760 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 01:15:15.650558    6760 main.go:141] libmachine: () Calling .GetMachineName
	I0806 01:15:15.650690    6760 main.go:141] libmachine: (docker-flags-346000) Calling .GetMachineName
	I0806 01:15:15.650792    6760 main.go:141] libmachine: (docker-flags-346000) Calling .DriverName
	I0806 01:15:15.650910    6760 start.go:159] libmachine.API.Create for "docker-flags-346000" (driver="hyperkit")
	I0806 01:15:15.650932    6760 client.go:168] LocalClient.Create starting
	I0806 01:15:15.650974    6760 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem
	I0806 01:15:15.651027    6760 main.go:141] libmachine: Decoding PEM data...
	I0806 01:15:15.651044    6760 main.go:141] libmachine: Parsing certificate...
	I0806 01:15:15.651102    6760 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem
	I0806 01:15:15.651140    6760 main.go:141] libmachine: Decoding PEM data...
	I0806 01:15:15.651152    6760 main.go:141] libmachine: Parsing certificate...
	I0806 01:15:15.651165    6760 main.go:141] libmachine: Running pre-create checks...
	I0806 01:15:15.651175    6760 main.go:141] libmachine: (docker-flags-346000) Calling .PreCreateCheck
	I0806 01:15:15.651258    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:15.651438    6760 main.go:141] libmachine: (docker-flags-346000) Calling .GetConfigRaw
	I0806 01:15:15.681645    6760 main.go:141] libmachine: Creating machine...
	I0806 01:15:15.681656    6760 main.go:141] libmachine: (docker-flags-346000) Calling .Create
	I0806 01:15:15.681751    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:15.681901    6760 main.go:141] libmachine: (docker-flags-346000) DBG | I0806 01:15:15.681755    6798 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 01:15:15.681961    6760 main.go:141] libmachine: (docker-flags-346000) Downloading /Users/jenkins/minikube-integration/19370-944/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-944/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 01:15:15.887075    6760 main.go:141] libmachine: (docker-flags-346000) DBG | I0806 01:15:15.886983    6798 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/id_rsa...
	I0806 01:15:15.985567    6760 main.go:141] libmachine: (docker-flags-346000) DBG | I0806 01:15:15.985493    6798 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/docker-flags-346000.rawdisk...
	I0806 01:15:15.985579    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Writing magic tar header
	I0806 01:15:15.985587    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Writing SSH key tar header
	I0806 01:15:15.986158    6760 main.go:141] libmachine: (docker-flags-346000) DBG | I0806 01:15:15.986116    6798 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000 ...
	I0806 01:15:16.359432    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:16.359454    6760 main.go:141] libmachine: (docker-flags-346000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/hyperkit.pid
	I0806 01:15:16.359498    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Using UUID 2b5f4de0-2f08-46c0-a5a4-7b93a25cc72f
	I0806 01:15:16.384543    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Generated MAC 4e:4a:bb:a2:1d:ab
	I0806 01:15:16.384559    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-346000
	I0806 01:15:16.384611    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:15:16 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2b5f4de0-2f08-46c0-a5a4-7b93a25cc72f", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process
:(*os.Process)(nil)}
	I0806 01:15:16.384639    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:15:16 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2b5f4de0-2f08-46c0-a5a4-7b93a25cc72f", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process
:(*os.Process)(nil)}
	I0806 01:15:16.384674    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:15:16 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2b5f4de0-2f08-46c0-a5a4-7b93a25cc72f", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/docker-flags-346000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/bzimage,/Users/jenkins/minikub
e-integration/19370-944/.minikube/machines/docker-flags-346000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-346000"}
	I0806 01:15:16.384702    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:15:16 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2b5f4de0-2f08-46c0-a5a4-7b93a25cc72f -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/docker-flags-346000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/console-ring -f kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000
/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-346000"
	I0806 01:15:16.384744    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:15:16 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0806 01:15:16.387771    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:15:16 DEBUG: hyperkit: Pid is 6799
	I0806 01:15:16.388810    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 0
	I0806 01:15:16.388823    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:16.388888    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6799
	I0806 01:15:16.390010    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 4e:4a:bb:a2:1d:ab in /var/db/dhcpd_leases ...
	I0806 01:15:16.390074    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:15:16.390094    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:15:16.390104    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:15:16.390120    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:15:16.390129    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:15:16.390145    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:15:16.390155    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:15:16.390164    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:15:16.390176    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:15:16.390187    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:15:16.390195    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:15:16.390208    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:15:16.390217    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:15:16.390225    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:15:16.390244    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:15:16.390261    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:15:16.390277    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:15:16.390290    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:15:16.395187    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:15:16 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0806 01:15:16.403344    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:15:16 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0806 01:15:16.404278    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:15:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 01:15:16.404312    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:15:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 01:15:16.404328    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:15:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 01:15:16.404348    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:15:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 01:15:16.780361    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:15:16 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0806 01:15:16.780380    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:15:16 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0806 01:15:16.894948    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:15:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 01:15:16.894964    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:15:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 01:15:16.894976    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:15:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 01:15:16.894997    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:15:16 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 01:15:16.895852    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:15:16 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0806 01:15:16.895874    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:15:16 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0806 01:15:18.392340    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 1
	I0806 01:15:18.392354    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:18.392453    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6799
	I0806 01:15:18.393304    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 4e:4a:bb:a2:1d:ab in /var/db/dhcpd_leases ...
	I0806 01:15:18.393379    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:15:18.393390    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:15:18.393402    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:15:18.393409    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:15:18.393415    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:15:18.393422    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:15:18.393430    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:15:18.393455    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:15:18.393465    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:15:18.393472    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:15:18.393478    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:15:18.393485    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:15:18.393493    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:15:18.393510    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:15:18.393527    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:15:18.393539    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:15:18.393547    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:15:18.393557    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:15:20.394433    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 2
	I0806 01:15:20.394456    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:20.394526    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6799
	I0806 01:15:20.395291    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 4e:4a:bb:a2:1d:ab in /var/db/dhcpd_leases ...
	I0806 01:15:20.395347    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:15:20.395361    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:15:20.395384    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:15:20.395394    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:15:20.395401    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:15:20.395410    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:15:20.395427    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:15:20.395447    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:15:20.395465    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:15:20.395479    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:15:20.395488    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:15:20.395496    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:15:20.395503    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:15:20.395509    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:15:20.395515    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:15:20.395522    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:15:20.395530    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:15:20.395539    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:15:22.264995    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:15:22 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0806 01:15:22.265116    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:15:22 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0806 01:15:22.265125    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:15:22 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0806 01:15:22.284959    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:15:22 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0806 01:15:22.397124    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 3
	I0806 01:15:22.397150    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:22.397364    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6799
	I0806 01:15:22.398821    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 4e:4a:bb:a2:1d:ab in /var/db/dhcpd_leases ...
	I0806 01:15:22.398946    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:15:22.398966    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:15:22.398980    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:15:22.398992    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:15:22.399009    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:15:22.399036    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:15:22.399047    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:15:22.399054    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:15:22.399063    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:15:22.399075    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:15:22.399085    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:15:22.399095    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:15:22.399103    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:15:22.399124    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:15:22.399151    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:15:22.399160    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:15:22.399172    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:15:22.399183    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:15:24.400583    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 4
	I0806 01:15:24.400599    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:24.400701    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6799
	I0806 01:15:24.401477    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 4e:4a:bb:a2:1d:ab in /var/db/dhcpd_leases ...
	I0806 01:15:24.401543    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:15:24.401554    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:15:24.401561    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:15:24.401567    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:15:24.401598    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:15:24.401609    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:15:24.401627    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:15:24.401641    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:15:24.401650    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:15:24.401662    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:15:24.401678    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:15:24.401691    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:15:24.401703    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:15:24.401713    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:15:24.401728    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:15:24.401737    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:15:24.401747    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:15:24.401757    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:15:26.401761    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 5
	I0806 01:15:26.401775    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:26.401857    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6799
	I0806 01:15:26.402635    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 4e:4a:bb:a2:1d:ab in /var/db/dhcpd_leases ...
	I0806 01:15:26.402686    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:15:26.402694    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:15:26.402702    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:15:26.402714    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:15:26.402721    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:15:26.402730    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:15:26.402736    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:15:26.402747    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:15:26.402759    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:15:26.402767    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:15:26.402776    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:15:26.402785    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:15:26.402792    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:15:26.402800    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:15:26.402806    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:15:26.402812    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:15:26.402824    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:15:26.402836    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:15:28.404914    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 6
	I0806 01:15:28.404926    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:28.404967    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6799
	I0806 01:15:28.405739    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 4e:4a:bb:a2:1d:ab in /var/db/dhcpd_leases ...
	I0806 01:15:28.405776    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:15:28.405785    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:15:28.405794    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:15:28.405805    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:15:28.405812    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:15:28.405819    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:15:28.405828    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:15:28.405842    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:15:28.405850    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:15:28.405871    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:15:28.405883    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:15:28.405897    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:15:28.405910    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:15:28.405918    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:15:28.405934    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:15:28.405940    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:15:28.405947    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:15:28.405955    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:15:30.407449    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 7
	I0806 01:15:30.407464    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:30.407565    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6799
	I0806 01:15:30.408316    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 4e:4a:bb:a2:1d:ab in /var/db/dhcpd_leases ...
	I0806 01:15:30.408365    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:15:30.408378    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:15:30.408391    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:15:30.408400    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:15:30.408407    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:15:30.408415    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:15:30.408427    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:15:30.408448    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:15:30.408456    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:15:30.408474    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:15:30.408488    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:15:30.408498    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:15:30.408506    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:15:30.408514    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:15:30.408522    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:15:30.408529    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:15:30.408536    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:15:30.408543    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:15:32.409120    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 8
	I0806 01:15:32.409133    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:32.409231    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6799
	I0806 01:15:32.409997    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 4e:4a:bb:a2:1d:ab in /var/db/dhcpd_leases ...
	I0806 01:15:32.410045    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:15:32.410052    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:15:32.410075    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:15:32.410088    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:15:32.410096    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:15:32.410102    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:15:32.410108    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:15:32.410125    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:15:32.410134    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:15:32.410142    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:15:32.410150    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:15:32.410156    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:15:32.410163    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:15:32.410169    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:15:32.410175    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:15:32.410183    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:15:32.410190    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:15:32.410198    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:15:34.411502    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 9
	I0806 01:15:34.411529    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:34.411597    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6799
	I0806 01:15:34.412389    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 4e:4a:bb:a2:1d:ab in /var/db/dhcpd_leases ...
	I0806 01:15:34.412403    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:15:34.412427    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:15:34.412450    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:15:34.412459    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:15:34.412465    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:15:34.412474    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:15:34.412482    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:15:34.412493    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:15:34.412499    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:15:34.412506    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:15:34.412516    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:15:34.412523    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:15:34.412530    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:15:34.412540    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:15:34.412548    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:15:34.412560    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:15:34.412572    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:15:34.412590    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:15:36.414013    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 10
	I0806 01:15:36.414030    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:36.414134    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6799
	I0806 01:15:36.414926    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 4e:4a:bb:a2:1d:ab in /var/db/dhcpd_leases ...
	I0806 01:15:36.414975    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:15:36.414986    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:15:36.414995    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:15:36.415015    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:15:36.415024    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:15:36.415037    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:15:36.415045    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:15:36.415051    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:15:36.415056    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:15:36.415066    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:15:36.415076    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:15:36.415085    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:15:36.415093    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:15:36.415099    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:15:36.415107    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:15:36.415113    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:15:36.415121    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:15:36.415137    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:15:38.415473    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 11
	I0806 01:15:38.415502    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:38.415555    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6799
	I0806 01:15:38.416320    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 4e:4a:bb:a2:1d:ab in /var/db/dhcpd_leases ...
	I0806 01:15:38.416377    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:15:38.416388    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:15:38.416414    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:15:38.416423    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:15:38.416439    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:15:38.416448    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:15:38.416455    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:15:38.416470    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:15:38.416479    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:15:38.416487    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:15:38.416495    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:15:38.416503    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:15:38.416510    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:15:38.416516    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:15:38.416529    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:15:38.416542    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:15:38.416553    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:15:38.416560    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:15:40.418636    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 12
	I0806 01:15:40.418652    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:40.418713    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6799
	I0806 01:15:40.419517    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 4e:4a:bb:a2:1d:ab in /var/db/dhcpd_leases ...
	I0806 01:15:40.419544    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:15:40.419553    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:15:40.419565    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:15:40.419571    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:15:40.419579    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:15:40.419587    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:15:40.419596    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:15:40.419602    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:15:40.419609    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:15:40.419617    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:15:40.419637    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:15:40.419649    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:15:40.419658    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:15:40.419671    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:15:40.419678    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:15:40.419685    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:15:40.419701    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:15:40.419716    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:15:42.421807    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 13
	I0806 01:15:42.421820    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:42.421895    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6799
	I0806 01:15:42.422801    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 4e:4a:bb:a2:1d:ab in /var/db/dhcpd_leases ...
	I0806 01:15:42.422859    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:15:42.422869    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:15:42.422878    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:15:42.422886    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:15:42.422895    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:15:42.422906    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:15:42.422912    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:15:42.422920    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:15:42.422926    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:15:42.422933    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:15:42.422949    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:15:42.422957    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:15:42.422964    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:15:42.422970    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:15:42.422977    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:15:42.422983    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:15:42.422991    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:15:42.422999    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:15:44.423393    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 14
	I0806 01:15:44.423407    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:44.423472    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6799
	I0806 01:15:44.424222    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 4e:4a:bb:a2:1d:ab in /var/db/dhcpd_leases ...
	I0806 01:15:44.424280    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:15:44.424293    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:15:44.424320    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:15:44.424336    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:15:44.424343    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:15:44.424350    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:15:44.424358    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:15:44.424364    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:15:44.424372    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:15:44.424386    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:15:44.424400    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:15:44.424408    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:15:44.424416    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:15:44.424423    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:15:44.424432    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:15:44.424438    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:15:44.424446    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:15:44.424458    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:15:46.425953    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 15
	I0806 01:15:46.425966    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:46.426080    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6799
	I0806 01:15:46.426835    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 4e:4a:bb:a2:1d:ab in /var/db/dhcpd_leases ...
	I0806 01:15:46.426887    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:15:46.426906    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:15:46.426923    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:15:46.426931    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:15:46.426943    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:15:46.426951    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:15:46.426958    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:15:46.426965    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:15:46.426975    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:15:46.426981    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:15:46.426988    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:15:46.426997    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:15:46.427004    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:15:46.427010    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:15:46.427024    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:15:46.427035    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:15:46.427055    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:15:46.427068    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:15:48.427625    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 16
	I0806 01:15:48.427641    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:48.427713    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6799
	I0806 01:15:48.428528    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 4e:4a:bb:a2:1d:ab in /var/db/dhcpd_leases ...
	I0806 01:15:48.428572    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:15:48.428580    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:15:48.428590    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:15:48.428602    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:15:48.428615    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:15:48.428639    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:15:48.428646    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:15:48.428663    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:15:48.428678    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:15:48.428690    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:15:48.428698    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:15:48.428710    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:15:48.428718    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:15:48.428725    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:15:48.428734    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:15:48.428740    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:15:48.428748    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:15:48.428757    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:15:50.429547    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 17
	I0806 01:15:50.429560    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:50.429649    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6799
	I0806 01:15:50.430469    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 4e:4a:bb:a2:1d:ab in /var/db/dhcpd_leases ...
	I0806 01:15:50.430492    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:15:50.430510    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:15:50.430522    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:15:50.430530    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:15:50.430536    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:15:50.430548    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:15:50.430556    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:15:50.430565    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:15:50.430587    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:15:50.430604    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:15:50.430612    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:15:50.430619    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:15:50.430635    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:15:50.430647    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:15:50.430671    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:15:50.430683    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:15:50.430693    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:15:50.430702    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:15:52.430993    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 18
	I0806 01:15:52.431023    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:52.431083    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6799
	I0806 01:15:52.431839    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 4e:4a:bb:a2:1d:ab in /var/db/dhcpd_leases ...
	I0806 01:15:52.431913    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:15:52.431923    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:15:52.431932    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:15:52.431940    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:15:52.431948    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:15:52.431954    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:15:52.431961    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:15:52.431969    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:15:52.431985    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:15:52.431998    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:15:52.432006    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:15:52.432015    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:15:52.432021    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:15:52.432029    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:15:52.432042    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:15:52.432048    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:15:52.432055    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:15:52.432063    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:15:54.433394    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 19
	I0806 01:15:54.433409    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:54.433542    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6799
	I0806 01:15:54.434324    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 4e:4a:bb:a2:1d:ab in /var/db/dhcpd_leases ...
	I0806 01:15:54.434367    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:15:54.434376    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:15:54.434383    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:15:54.434389    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:15:54.434405    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:15:54.434418    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:15:54.434436    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:15:54.434447    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:15:54.434455    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:15:54.434464    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:15:54.434472    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:15:54.434480    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:15:54.434498    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:15:54.434506    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:15:54.434514    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:15:54.434522    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:15:54.434530    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:15:54.434537    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:15:56.436538    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 20
	I0806 01:15:56.436555    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:56.436610    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6799
	I0806 01:15:56.437469    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 4e:4a:bb:a2:1d:ab in /var/db/dhcpd_leases ...
	I0806 01:15:56.437514    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:15:56.437527    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:15:56.437536    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:15:56.437542    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:15:56.437553    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:15:56.437561    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:15:56.437576    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:15:56.437593    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:15:56.437609    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:15:56.437617    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:15:56.437625    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:15:56.437633    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:15:56.437642    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:15:56.437648    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:15:56.437660    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:15:56.437672    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:15:56.437680    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:15:56.437688    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:15:58.438610    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 21
	I0806 01:15:58.438621    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:58.438693    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6799
	I0806 01:15:58.439637    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 4e:4a:bb:a2:1d:ab in /var/db/dhcpd_leases ...
	I0806 01:15:58.439660    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:15:58.439677    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:15:58.439688    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:15:58.439694    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:15:58.439700    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:15:58.439719    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:15:58.439728    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:15:58.439736    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:15:58.439743    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:15:58.439749    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:15:58.439756    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:15:58.439763    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:15:58.439772    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:15:58.439779    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:15:58.439785    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:15:58.439792    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:15:58.439805    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:15:58.439819    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:16:00.440916    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 22
	I0806 01:16:00.440931    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:00.440975    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6799
	I0806 01:16:00.441803    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 4e:4a:bb:a2:1d:ab in /var/db/dhcpd_leases ...
	I0806 01:16:00.441849    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:16:00.441860    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:16:00.441873    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:16:00.441881    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:16:00.441887    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:16:00.441895    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:16:00.441907    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:16:00.441916    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:16:00.441923    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:16:00.441929    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:16:00.441935    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:16:00.441942    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:16:00.441948    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:16:00.441962    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:16:00.441974    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:16:00.441996    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:16:00.442008    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:16:00.442018    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:16:02.443636    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 23
	I0806 01:16:02.443651    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:02.443702    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6799
	I0806 01:16:02.444551    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 4e:4a:bb:a2:1d:ab in /var/db/dhcpd_leases ...
	I0806 01:16:02.444601    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:16:02.444612    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:16:02.444622    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:16:02.444632    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:16:02.444641    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:16:02.444651    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:16:02.444666    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:16:02.444679    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:16:02.444690    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:16:02.444699    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:16:02.444706    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:16:02.444714    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:16:02.444726    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:16:02.444734    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:16:02.444751    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:16:02.444764    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:16:02.444773    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:16:02.444780    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:16:04.445667    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 24
	I0806 01:16:04.445681    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:04.445804    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6799
	I0806 01:16:04.446636    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 4e:4a:bb:a2:1d:ab in /var/db/dhcpd_leases ...
	I0806 01:16:04.446673    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:16:04.446684    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:16:04.446705    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:16:04.446715    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:16:04.446722    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:16:04.446728    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:16:04.446734    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:16:04.446743    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:16:04.446757    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:16:04.446770    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:16:04.446785    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:16:04.446798    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:16:04.446808    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:16:04.446816    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:16:04.446823    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:16:04.446829    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:16:04.446835    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:16:04.446846    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:16:06.448845    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 25
	I0806 01:16:06.448857    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:06.448939    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6799
	I0806 01:16:06.450030    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 4e:4a:bb:a2:1d:ab in /var/db/dhcpd_leases ...
	I0806 01:16:06.450082    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:16:06.450097    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:16:06.450109    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:16:06.450116    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:16:06.450145    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:16:06.450159    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:16:06.450166    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:16:06.450172    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:16:06.450178    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:16:06.450185    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:16:06.450193    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:16:06.450210    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:16:06.450223    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:16:06.450240    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:16:06.450248    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:16:06.450255    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:16:06.450264    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:16:06.450272    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:16:08.450436    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 26
	I0806 01:16:08.450448    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:08.450505    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6799
	I0806 01:16:08.451326    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 4e:4a:bb:a2:1d:ab in /var/db/dhcpd_leases ...
	I0806 01:16:08.451370    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:16:08.451382    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:16:08.451391    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:16:08.451397    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:16:08.451404    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:16:08.451410    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:16:08.451416    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:16:08.451422    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:16:08.451435    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:16:08.451444    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:16:08.451460    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:16:08.451472    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:16:08.451484    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:16:08.451490    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:16:08.451496    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:16:08.451503    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:16:08.451509    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:16:08.451515    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:16:10.453596    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 27
	I0806 01:16:10.453613    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:10.453694    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6799
	I0806 01:16:10.454510    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 4e:4a:bb:a2:1d:ab in /var/db/dhcpd_leases ...
	I0806 01:16:10.454547    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:16:10.454559    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:16:10.454568    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:16:10.454575    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:16:10.454581    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:16:10.454588    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:16:10.454605    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:16:10.454612    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:16:10.454619    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:16:10.454627    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:16:10.454639    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:16:10.454647    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:16:10.454654    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:16:10.454662    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:16:10.454669    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:16:10.454676    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:16:10.454685    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:16:10.454696    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:16:12.456519    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 28
	I0806 01:16:12.456545    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:12.456627    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6799
	I0806 01:16:12.457372    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 4e:4a:bb:a2:1d:ab in /var/db/dhcpd_leases ...
	I0806 01:16:12.457416    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:16:12.457428    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:16:12.457437    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:16:12.457443    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:16:12.457455    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:16:12.457466    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:16:12.457474    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:16:12.457483    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:16:12.457490    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:16:12.457495    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:16:12.457502    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:16:12.457510    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:16:12.457527    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:16:12.457539    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:16:12.457548    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:16:12.457556    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:16:12.457569    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:16:12.457582    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:16:14.458002    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 29
	I0806 01:16:14.458017    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:14.458125    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6799
	I0806 01:16:14.458914    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 4e:4a:bb:a2:1d:ab in /var/db/dhcpd_leases ...
	I0806 01:16:14.458978    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:16:14.458989    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:16:14.458999    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:16:14.459008    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:16:14.459019    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:16:14.459025    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:16:14.459033    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:16:14.459041    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:16:14.459062    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:16:14.459075    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:16:14.459090    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:16:14.459103    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:16:14.459120    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:16:14.459130    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:16:14.459136    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:16:14.459146    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:16:14.459154    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:16:14.459162    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:16:16.460681    6760 client.go:171] duration metric: took 1m0.808677952s to LocalClient.Create
	I0806 01:16:18.462112    6760 start.go:128] duration metric: took 1m2.841672074s to createHost
	I0806 01:16:18.462132    6760 start.go:83] releasing machines lock for "docker-flags-346000", held for 1m2.841792158s
	W0806 01:16:18.462150    6760 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 4e:4a:bb:a2:1d:ab
	I0806 01:16:18.462458    6760 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 01:16:18.462500    6760 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 01:16:18.471146    6760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53749
	I0806 01:16:18.471483    6760 main.go:141] libmachine: () Calling .GetVersion
	I0806 01:16:18.471861    6760 main.go:141] libmachine: Using API Version  1
	I0806 01:16:18.471879    6760 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 01:16:18.472085    6760 main.go:141] libmachine: () Calling .GetMachineName
	I0806 01:16:18.472531    6760 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 01:16:18.472554    6760 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 01:16:18.480979    6760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53751
	I0806 01:16:18.481316    6760 main.go:141] libmachine: () Calling .GetVersion
	I0806 01:16:18.481662    6760 main.go:141] libmachine: Using API Version  1
	I0806 01:16:18.481681    6760 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 01:16:18.481884    6760 main.go:141] libmachine: () Calling .GetMachineName
	I0806 01:16:18.482031    6760 main.go:141] libmachine: (docker-flags-346000) Calling .GetState
	I0806 01:16:18.482125    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:18.482195    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6799
	I0806 01:16:18.483124    6760 main.go:141] libmachine: (docker-flags-346000) Calling .DriverName
	I0806 01:16:18.504398    6760 out.go:177] * Deleting "docker-flags-346000" in hyperkit ...
	I0806 01:16:18.546188    6760 main.go:141] libmachine: (docker-flags-346000) Calling .Remove
	I0806 01:16:18.546361    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:18.546378    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:18.546442    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6799
	I0806 01:16:18.547369    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:18.547422    6760 main.go:141] libmachine: (docker-flags-346000) DBG | waiting for graceful shutdown
	I0806 01:16:19.547623    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:19.547710    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6799
	I0806 01:16:19.548634    6760 main.go:141] libmachine: (docker-flags-346000) DBG | waiting for graceful shutdown
	I0806 01:16:20.549463    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:20.549585    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6799
	I0806 01:16:20.551182    6760 main.go:141] libmachine: (docker-flags-346000) DBG | waiting for graceful shutdown
	I0806 01:16:21.552133    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:21.552222    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6799
	I0806 01:16:21.552965    6760 main.go:141] libmachine: (docker-flags-346000) DBG | waiting for graceful shutdown
	I0806 01:16:22.555129    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:22.555228    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6799
	I0806 01:16:22.555753    6760 main.go:141] libmachine: (docker-flags-346000) DBG | waiting for graceful shutdown
	I0806 01:16:23.556995    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:23.557123    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6799
	I0806 01:16:23.558340    6760 main.go:141] libmachine: (docker-flags-346000) DBG | sending sigkill
	I0806 01:16:23.558359    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:23.570487    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:16:23 WARN : hyperkit: failed to read stderr: EOF
	I0806 01:16:23.570508    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:16:23 WARN : hyperkit: failed to read stdout: EOF
	W0806 01:16:23.595041    6760 out.go:239] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 4e:4a:bb:a2:1d:ab
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 4e:4a:bb:a2:1d:ab
	I0806 01:16:23.595075    6760 start.go:729] Will try again in 5 seconds ...
	I0806 01:16:28.595343    6760 start.go:360] acquireMachinesLock for docker-flags-346000: {Name:mk23fe223591838ba69a1052c4474834b6e8897d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:17:21.260954    6760 start.go:364] duration metric: took 52.664666642s to acquireMachinesLock for "docker-flags-346000"
	I0806 01:17:21.260978    6760 start.go:93] Provisioning new machine with config: &{Name:docker-flags-346000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSH
Key: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-346000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountI
P: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:17:21.261053    6760 start.go:125] createHost starting for "" (driver="hyperkit")
	I0806 01:17:21.282373    6760 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0806 01:17:21.282438    6760 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 01:17:21.282469    6760 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 01:17:21.290986    6760 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53755
	I0806 01:17:21.291310    6760 main.go:141] libmachine: () Calling .GetVersion
	I0806 01:17:21.291643    6760 main.go:141] libmachine: Using API Version  1
	I0806 01:17:21.291654    6760 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 01:17:21.291923    6760 main.go:141] libmachine: () Calling .GetMachineName
	I0806 01:17:21.292234    6760 main.go:141] libmachine: (docker-flags-346000) Calling .GetMachineName
	I0806 01:17:21.292352    6760 main.go:141] libmachine: (docker-flags-346000) Calling .DriverName
	I0806 01:17:21.292480    6760 start.go:159] libmachine.API.Create for "docker-flags-346000" (driver="hyperkit")
	I0806 01:17:21.292499    6760 client.go:168] LocalClient.Create starting
	I0806 01:17:21.292529    6760 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem
	I0806 01:17:21.292585    6760 main.go:141] libmachine: Decoding PEM data...
	I0806 01:17:21.292597    6760 main.go:141] libmachine: Parsing certificate...
	I0806 01:17:21.292642    6760 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem
	I0806 01:17:21.292685    6760 main.go:141] libmachine: Decoding PEM data...
	I0806 01:17:21.292695    6760 main.go:141] libmachine: Parsing certificate...
	I0806 01:17:21.292712    6760 main.go:141] libmachine: Running pre-create checks...
	I0806 01:17:21.292717    6760 main.go:141] libmachine: (docker-flags-346000) Calling .PreCreateCheck
	I0806 01:17:21.292833    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:17:21.292873    6760 main.go:141] libmachine: (docker-flags-346000) Calling .GetConfigRaw
	I0806 01:17:21.324118    6760 main.go:141] libmachine: Creating machine...
	I0806 01:17:21.324126    6760 main.go:141] libmachine: (docker-flags-346000) Calling .Create
	I0806 01:17:21.324230    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:17:21.324403    6760 main.go:141] libmachine: (docker-flags-346000) DBG | I0806 01:17:21.324222    6831 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 01:17:21.324467    6760 main.go:141] libmachine: (docker-flags-346000) Downloading /Users/jenkins/minikube-integration/19370-944/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-944/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 01:17:21.748103    6760 main.go:141] libmachine: (docker-flags-346000) DBG | I0806 01:17:21.748021    6831 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/id_rsa...
	I0806 01:17:21.867751    6760 main.go:141] libmachine: (docker-flags-346000) DBG | I0806 01:17:21.867703    6831 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/docker-flags-346000.rawdisk...
	I0806 01:17:21.867771    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Writing magic tar header
	I0806 01:17:21.867794    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Writing SSH key tar header
	I0806 01:17:21.868073    6760 main.go:141] libmachine: (docker-flags-346000) DBG | I0806 01:17:21.868039    6831 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000 ...
	I0806 01:17:22.284624    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:17:22.284682    6760 main.go:141] libmachine: (docker-flags-346000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/hyperkit.pid
	I0806 01:17:22.284702    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Using UUID 755a47ca-a942-4200-9f30-909c494b080f
	I0806 01:17:22.310042    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Generated MAC 12:d2:aa:6f:3a:eb
	I0806 01:17:22.310058    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-346000
	I0806 01:17:22.310094    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:17:22 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"755a47ca-a942-4200-9f30-909c494b080f", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000198630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 01:17:22.310131    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:17:22 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"755a47ca-a942-4200-9f30-909c494b080f", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000198630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 01:17:22.310182    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:17:22 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "755a47ca-a942-4200-9f30-909c494b080f", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/docker-flags-346000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-346000"}
	I0806 01:17:22.310224    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:17:22 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 755a47ca-a942-4200-9f30-909c494b080f -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/docker-flags-346000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/console-ring -f kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-346000"
	I0806 01:17:22.310240    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:17:22 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0806 01:17:22.313146    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:17:22 DEBUG: hyperkit: Pid is 6847
	I0806 01:17:22.314763    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 0
	I0806 01:17:22.314778    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:17:22.314866    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6847
	I0806 01:17:22.315859    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 12:d2:aa:6f:3a:eb in /var/db/dhcpd_leases ...
	I0806 01:17:22.315937    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:17:22.315954    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:17:22.315984    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:17:22.315999    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:17:22.316010    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:17:22.316018    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:17:22.316038    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:17:22.316048    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:17:22.316083    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:17:22.316098    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:17:22.316117    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:17:22.316128    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:17:22.316143    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:17:22.316165    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:17:22.316181    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:17:22.316197    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:17:22.316209    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:17:22.316223    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
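The attempt loop above shows the hyperkit driver polling /var/db/dhcpd_leases for an entry whose hardware address matches the VM's generated MAC (12:d2:aa:6f:3a:eb); the test eventually fails because no such lease ever appears. A minimal sketch of that matching step, assuming a simplified one-line-per-lease form like the driver's own debug output (the real dhcpd_leases file uses a multi-line brace format, and `leaseForMAC` is a hypothetical helper, not the driver's actual function):

```go
package main

import (
	"fmt"
	"strings"
)

// leaseForMAC scans lease entries, each rendered like the driver's debug
// output (e.g. "Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65"),
// and returns the IP address bound to the given MAC, or "" if no lease
// matches yet (which is what drives the "Attempt N" retry loop).
func leaseForMAC(entries []string, mac string) string {
	for _, e := range entries {
		var ip, hw string
		for _, f := range strings.Fields(e) {
			switch {
			case strings.HasPrefix(f, "IPAddress:"):
				ip = strings.TrimPrefix(f, "IPAddress:")
			case strings.HasPrefix(f, "HWAddress:"):
				hw = strings.TrimPrefix(f, "HWAddress:")
			}
		}
		if hw == mac {
			return ip
		}
	}
	return "" // no lease yet; caller sleeps and retries
}

func main() {
	entries := []string{
		"Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65",
		"Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97",
	}
	fmt.Println(leaseForMAC(entries, "d2:ca:81:24:8f:65")) // prints 192.169.0.5
	fmt.Println(leaseForMAC(entries, "12:d2:aa:6f:3a:eb") == "") // prints true: no lease yet, retry
}
```

In the log, all 17 leases belong to other machines, so the empty result repeats on every attempt until the driver gives up.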
	I0806 01:17:22.321527    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:17:22 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0806 01:17:22.329647    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:17:22 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/docker-flags-346000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0806 01:17:22.330533    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:17:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 01:17:22.330566    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:17:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 01:17:22.330581    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:17:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 01:17:22.330605    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:17:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 01:17:22.711288    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:17:22 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0806 01:17:22.711305    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:17:22 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0806 01:17:22.825880    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:17:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 01:17:22.825901    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:17:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 01:17:22.825927    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:17:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 01:17:22.825941    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:17:22 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 01:17:22.826788    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:17:22 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0806 01:17:22.826802    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:17:22 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0806 01:17:24.317341    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 1
	I0806 01:17:24.317359    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:17:24.317369    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6847
	I0806 01:17:24.318259    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 12:d2:aa:6f:3a:eb in /var/db/dhcpd_leases ...
	I0806 01:17:24.318334    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:17:24.318346    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:17:24.318354    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:17:24.318362    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:17:24.318370    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:17:24.318376    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:17:24.318385    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:17:24.318394    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:17:24.318413    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:17:24.318426    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:17:24.318441    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:17:24.318457    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:17:24.318465    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:17:24.318473    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:17:24.318485    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:17:24.318494    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:17:24.318502    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:17:24.318519    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:17:26.318752    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 2
	I0806 01:17:26.318774    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:17:26.318899    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6847
	I0806 01:17:26.319674    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 12:d2:aa:6f:3a:eb in /var/db/dhcpd_leases ...
	I0806 01:17:26.319716    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:17:26.319732    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:17:26.319747    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:17:26.319758    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:17:26.319768    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:17:26.319781    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:17:26.319797    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:17:26.319809    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:17:26.319822    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:17:26.319830    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:17:26.319845    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:17:26.319856    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:17:26.319864    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:17:26.319872    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:17:26.319893    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:17:26.319906    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:17:26.319914    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:17:26.319923    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:17:28.221396    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:17:28 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0806 01:17:28.221518    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:17:28 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0806 01:17:28.221528    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:17:28 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0806 01:17:28.242224    6760 main.go:141] libmachine: (docker-flags-346000) DBG | 2024/08/06 01:17:28 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0806 01:17:28.319935    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 3
	I0806 01:17:28.319963    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:17:28.320131    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6847
	I0806 01:17:28.321535    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 12:d2:aa:6f:3a:eb in /var/db/dhcpd_leases ...
	I0806 01:17:28.321625    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:17:28.321650    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:17:28.321682    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:17:28.321691    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:17:28.321700    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:17:28.321709    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:17:28.321718    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:17:28.321729    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:17:28.321739    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:17:28.321776    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:17:28.321795    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:17:28.321804    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:17:28.321816    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:17:28.321832    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:17:28.321843    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:17:28.321854    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:17:28.321864    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:17:28.321872    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:17:30.323835    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 4
	I0806 01:17:30.323851    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:17:30.323935    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6847
	I0806 01:17:30.324729    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 12:d2:aa:6f:3a:eb in /var/db/dhcpd_leases ...
	I0806 01:17:30.324773    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:17:30.324787    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:17:30.324796    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:17:30.324805    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:17:30.324817    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:17:30.324824    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:17:30.324830    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:17:30.324836    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:17:30.324861    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:17:30.324883    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:17:30.324896    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:17:30.324906    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:17:30.324917    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:17:30.324924    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:17:30.324933    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:17:30.324940    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:17:30.324946    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:17:30.324953    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:17:32.327041    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 5
	I0806 01:17:32.327056    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:17:32.327097    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6847
	I0806 01:17:32.327922    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 12:d2:aa:6f:3a:eb in /var/db/dhcpd_leases ...
	I0806 01:17:32.327960    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:17:32.327968    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:17:32.327993    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:17:32.328007    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:17:32.328016    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:17:32.328023    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:17:32.328034    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:17:32.328045    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:17:32.328053    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:17:32.328074    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:17:32.328088    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:17:32.328098    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:17:32.328114    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:17:32.328132    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:17:32.328141    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:17:32.328147    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:17:32.328159    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:17:32.328173    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:17:34.330190    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 6
	I0806 01:17:34.330206    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:17:34.330340    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6847
	I0806 01:17:34.331152    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 12:d2:aa:6f:3a:eb in /var/db/dhcpd_leases ...
	I0806 01:17:34.331195    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:17:36.332026    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 7
	I0806 01:17:36.332039    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:17:36.332079    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6847
	I0806 01:17:36.332874    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 12:d2:aa:6f:3a:eb in /var/db/dhcpd_leases ...
	I0806 01:17:36.332893    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:17:38.333235    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 8
	I0806 01:17:38.333249    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:17:38.333313    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6847
	I0806 01:17:38.334179    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 12:d2:aa:6f:3a:eb in /var/db/dhcpd_leases ...
	I0806 01:17:38.334192    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:17:40.335550    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 9
	I0806 01:17:40.335565    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:17:40.335641    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6847
	I0806 01:17:40.336473    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 12:d2:aa:6f:3a:eb in /var/db/dhcpd_leases ...
	I0806 01:17:40.336484    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:17:42.338722    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 10
	I0806 01:17:42.338738    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:17:42.338771    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6847
	I0806 01:17:42.339747    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 12:d2:aa:6f:3a:eb in /var/db/dhcpd_leases ...
	I0806 01:17:42.339787    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:17:44.342023    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 11
	I0806 01:17:44.342037    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:17:44.342094    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6847
	I0806 01:17:44.342911    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 12:d2:aa:6f:3a:eb in /var/db/dhcpd_leases ...
	I0806 01:17:44.342958    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:17:46.343652    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 12
	I0806 01:17:46.343676    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:17:46.343738    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6847
	I0806 01:17:46.344539    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 12:d2:aa:6f:3a:eb in /var/db/dhcpd_leases ...
	I0806 01:17:46.344585    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:17:48.345198    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 13
	I0806 01:17:48.345211    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:17:48.345258    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6847
	I0806 01:17:48.346081    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 12:d2:aa:6f:3a:eb in /var/db/dhcpd_leases ...
	I0806 01:17:48.346123    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:17:50.348374    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 14
	I0806 01:17:50.348386    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:17:50.348423    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6847
	I0806 01:17:50.349251    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 12:d2:aa:6f:3a:eb in /var/db/dhcpd_leases ...
	I0806 01:17:50.349304    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:17:50.349317    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:17:50.349327    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:17:50.349344    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:17:50.349351    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:17:50.349365    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:17:50.349372    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:17:50.349379    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:17:50.349389    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:17:50.349405    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:17:50.349418    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:17:50.349426    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:17:50.349435    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:17:50.349447    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:17:50.349453    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:17:50.349461    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:17:50.349472    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:17:50.349488    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:17:52.351550    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 15
	I0806 01:17:52.351565    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:17:52.351639    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6847
	I0806 01:17:52.352436    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 12:d2:aa:6f:3a:eb in /var/db/dhcpd_leases ...
	I0806 01:17:52.352486    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:17:52.352498    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:17:52.352527    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:17:52.352545    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:17:52.352554    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:17:52.352565    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:17:52.352575    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:17:52.352584    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:17:52.352590    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:17:52.352597    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:17:52.352605    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:17:52.352612    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:17:52.352617    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:17:52.352633    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:17:52.352650    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:17:52.352659    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:17:52.352666    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:17:52.352674    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:17:54.353094    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 16
	I0806 01:17:54.353109    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:17:54.353192    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6847
	I0806 01:17:54.353998    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 12:d2:aa:6f:3a:eb in /var/db/dhcpd_leases ...
	I0806 01:17:54.354044    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:17:54.354055    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:17:54.354063    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:17:54.354069    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:17:54.354083    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:17:54.354095    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:17:54.354116    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:17:54.354124    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:17:54.354132    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:17:54.354140    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:17:54.354147    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:17:54.354157    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:17:54.354167    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:17:54.354174    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:17:54.354181    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:17:54.354187    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:17:54.354193    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:17:54.354203    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:17:56.354740    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 17
	I0806 01:17:56.354755    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:17:56.354818    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6847
	I0806 01:17:56.355594    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 12:d2:aa:6f:3a:eb in /var/db/dhcpd_leases ...
	I0806 01:17:56.355638    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:17:56.355649    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:17:56.355658    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:17:56.355665    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:17:56.355676    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:17:56.355686    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:17:56.355693    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:17:56.355700    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:17:56.355707    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:17:56.355713    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:17:56.355720    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:17:56.355726    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:17:56.355733    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:17:56.355740    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:17:56.355766    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:17:56.355777    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:17:56.355785    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:17:56.355805    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:17:58.355868    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 18
	I0806 01:17:58.355883    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:17:58.355939    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6847
	I0806 01:17:58.356729    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 12:d2:aa:6f:3a:eb in /var/db/dhcpd_leases ...
	I0806 01:17:58.356776    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:17:58.356787    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:17:58.356797    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:17:58.356809    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:17:58.356817    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:17:58.356823    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:17:58.356841    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:17:58.356854    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:17:58.356865    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:17:58.356874    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:17:58.356881    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:17:58.356889    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:17:58.356898    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:17:58.356906    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:17:58.356920    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:17:58.356935    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:17:58.356945    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:17:58.356953    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:18:00.358803    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 19
	I0806 01:18:00.358816    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:18:00.358886    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6847
	I0806 01:18:00.359671    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 12:d2:aa:6f:3a:eb in /var/db/dhcpd_leases ...
	I0806 01:18:00.359707    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:18:00.359717    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:18:00.359742    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:18:00.359751    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:18:00.359760    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:18:00.359767    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:18:00.359774    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:18:00.359780    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:18:00.359786    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:18:00.359794    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:18:00.359805    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:18:00.359813    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:18:00.359820    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:18:00.359827    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:18:00.359836    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:18:00.359863    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:18:00.359874    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:18:00.359881    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:18:02.361402    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 20
	I0806 01:18:02.361415    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:18:02.361541    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6847
	I0806 01:18:02.362526    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 12:d2:aa:6f:3a:eb in /var/db/dhcpd_leases ...
	I0806 01:18:02.362577    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:18:02.362588    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:18:02.362608    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:18:02.362617    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:18:02.362625    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:18:02.362635    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:18:02.362661    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:18:02.362676    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:18:02.362685    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:18:02.362694    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:18:02.362701    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:18:02.362709    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:18:02.362723    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:18:02.362734    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:18:02.362744    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:18:02.362760    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:18:02.362768    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:18:02.362774    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:18:04.364246    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 21
	I0806 01:18:04.364262    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:18:04.364357    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6847
	I0806 01:18:04.365194    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 12:d2:aa:6f:3a:eb in /var/db/dhcpd_leases ...
	I0806 01:18:04.365242    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:18:04.365253    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:18:04.365265    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:18:04.365276    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:18:04.365301    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:18:04.365311    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:18:04.365319    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:18:04.365327    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:18:04.365344    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:18:04.365359    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:18:04.365368    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:18:04.365374    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:18:04.365381    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:18:04.365389    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:18:04.365401    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:18:04.365409    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:18:04.365417    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:18:04.365425    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:18:06.367431    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 22
	I0806 01:18:06.367451    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:18:06.367664    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6847
	I0806 01:18:06.368403    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 12:d2:aa:6f:3a:eb in /var/db/dhcpd_leases ...
	I0806 01:18:06.368457    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:18:06.368466    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:18:06.368476    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:18:06.368483    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:18:06.368491    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:18:06.368496    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:18:06.368510    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:18:06.368529    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:18:06.368536    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:18:06.368564    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:18:06.368575    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:18:06.368591    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:18:06.368605    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:18:06.368613    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:18:06.368631    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:18:06.368644    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:18:06.368655    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:18:06.368665    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:18:08.368828    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 23
	I0806 01:18:08.368841    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:18:08.368944    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6847
	I0806 01:18:08.369887    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 12:d2:aa:6f:3a:eb in /var/db/dhcpd_leases ...
	I0806 01:18:08.369916    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:18:08.369926    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:18:08.369959    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:18:08.369971    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:18:08.369982    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:18:08.370003    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:18:08.370013    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:18:08.370026    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:18:08.370034    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:18:08.370041    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:18:08.370050    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:18:08.370057    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:18:08.370069    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:18:08.370079    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:18:08.370086    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:18:08.370092    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:18:08.370098    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:18:08.370105    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:18:10.371833    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 24
	I0806 01:18:10.371849    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:18:10.371925    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6847
	I0806 01:18:10.372714    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 12:d2:aa:6f:3a:eb in /var/db/dhcpd_leases ...
	I0806 01:18:10.372755    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:18:10.372763    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:18:10.372771    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:18:10.372776    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:18:10.372782    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:18:10.372788    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:18:10.372794    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:18:10.372819    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:18:10.372834    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:18:10.372843    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:18:10.372852    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:18:10.372857    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:18:10.372882    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:18:10.372890    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:18:10.372897    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:18:10.372911    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:18:10.372923    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:18:10.372934    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:18:12.374107    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 25
	I0806 01:18:12.374121    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:18:12.374159    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6847
	I0806 01:18:12.374956    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 12:d2:aa:6f:3a:eb in /var/db/dhcpd_leases ...
	I0806 01:18:12.375000    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:18:12.375012    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:18:12.375020    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:18:12.375030    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:18:12.375052    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:18:12.375059    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:18:12.375066    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:18:12.375072    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:18:12.375079    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:18:12.375087    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:18:12.375095    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:18:12.375102    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:18:12.375109    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:18:12.375115    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:18:12.375124    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:18:12.375131    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:18:12.375139    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:18:12.375146    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:18:14.375459    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 26
	I0806 01:18:14.375483    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:18:14.375525    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6847
	I0806 01:18:14.376359    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 12:d2:aa:6f:3a:eb in /var/db/dhcpd_leases ...
	I0806 01:18:14.376404    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:18:14.376415    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:18:14.376425    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:18:14.376430    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:18:14.376438    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:18:14.376444    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:18:14.376451    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:18:14.376465    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:18:14.376473    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:18:14.376479    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:18:14.376486    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:18:14.376492    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:18:14.376499    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:18:14.376506    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:18:14.376514    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:18:14.376521    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:18:14.376528    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:18:14.376535    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:18:16.378594    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 27
	I0806 01:18:16.378606    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:18:16.378664    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6847
	I0806 01:18:16.379418    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 12:d2:aa:6f:3a:eb in /var/db/dhcpd_leases ...
	I0806 01:18:16.379476    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:18:16.379489    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:18:16.379496    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:18:16.379506    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:18:16.379513    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:18:16.379518    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:18:16.379525    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:18:16.379532    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:18:16.379545    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:18:16.379554    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:18:16.379563    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:18:16.379572    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:18:16.379579    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:18:16.379587    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:18:16.379594    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:18:16.379601    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:18:16.379608    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:18:16.379614    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:18:18.379658    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 28
	I0806 01:18:18.380121    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:18:18.380143    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6847
	I0806 01:18:18.380746    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 12:d2:aa:6f:3a:eb in /var/db/dhcpd_leases ...
	I0806 01:18:18.380783    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:18:18.380804    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:18:18.380818    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:18:18.380829    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:18:18.380846    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:18:18.380875    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:18:18.380888    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:18:18.380949    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:18:18.380977    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:18:18.381007    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:18:18.381026    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:18:18.381039    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:18:18.381046    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:18:18.381077    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:18:18.381218    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:18:18.381232    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:18:18.381241    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:18:18.381250    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:18:20.381179    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Attempt 29
	I0806 01:18:20.381195    6760 main.go:141] libmachine: (docker-flags-346000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:18:20.381283    6760 main.go:141] libmachine: (docker-flags-346000) DBG | hyperkit pid from json: 6847
	I0806 01:18:20.382043    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Searching for 12:d2:aa:6f:3a:eb in /var/db/dhcpd_leases ...
	I0806 01:18:20.382096    6760 main.go:141] libmachine: (docker-flags-346000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:18:20.382109    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:18:20.382117    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:18:20.382123    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:18:20.382131    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:18:20.382137    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:18:20.382144    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:18:20.382152    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:18:20.382160    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:18:20.382166    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:18:20.382172    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:18:20.382180    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:18:20.382187    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:18:20.382194    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:18:20.382202    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:18:20.382209    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:18:20.382217    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:18:20.382225    6760 main.go:141] libmachine: (docker-flags-346000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:18:22.383586    6760 client.go:171] duration metric: took 1m1.09001183s to LocalClient.Create
	I0806 01:18:24.384033    6760 start.go:128] duration metric: took 1m3.121867344s to createHost
	I0806 01:18:24.384048    6760 start.go:83] releasing machines lock for "docker-flags-346000", held for 1m3.121982051s
	W0806 01:18:24.384117    6760 out.go:239] * Failed to start hyperkit VM. Running "minikube delete -p docker-flags-346000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 12:d2:aa:6f:3a:eb
	* Failed to start hyperkit VM. Running "minikube delete -p docker-flags-346000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 12:d2:aa:6f:3a:eb
	I0806 01:18:24.427125    6760 out.go:177] 
	W0806 01:18:24.448314    6760 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 12:d2:aa:6f:3a:eb
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 12:d2:aa:6f:3a:eb
	W0806 01:18:24.448331    6760 out.go:239] * 
	* 
	W0806 01:18:24.449018    6760 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 01:18:24.511240    6760 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-346000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-346000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-346000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 50 (177.61021ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node docker-flags-346000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-346000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 50
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-346000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-346000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 50 (169.368208ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node docker-flags-346000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-346000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 50
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-346000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-08-06 01:18:24.965773 -0700 PDT m=+4474.129439146
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-346000 -n docker-flags-346000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-346000 -n docker-flags-346000: exit status 7 (99.546911ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0806 01:18:25.062887    6890 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0806 01:18:25.062913    6890 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-346000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "docker-flags-346000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-346000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-346000: (5.255600285s)
--- FAIL: TestDockerFlags (252.06s)

                                                
                                    
TestForceSystemdFlag (252.08s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-672000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
E0806 01:13:22.434177    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-672000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (4m6.502416512s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-672000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "force-systemd-flag-672000" primary control-plane node in "force-systemd-flag-672000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "force-systemd-flag-672000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 01:13:14.998603    6729 out.go:291] Setting OutFile to fd 1 ...
	I0806 01:13:14.998812    6729 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:13:14.998818    6729 out.go:304] Setting ErrFile to fd 2...
	I0806 01:13:14.998822    6729 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:13:14.999004    6729 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	I0806 01:13:15.000519    6729 out.go:298] Setting JSON to false
	I0806 01:13:15.024171    6729 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":4357,"bootTime":1722927638,"procs":442,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0806 01:13:15.024278    6729 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 01:13:15.045654    6729 out.go:177] * [force-systemd-flag-672000] minikube v1.33.1 on Darwin 14.5
	I0806 01:13:15.088260    6729 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 01:13:15.088271    6729 notify.go:220] Checking for updates...
	I0806 01:13:15.130249    6729 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 01:13:15.151161    6729 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0806 01:13:15.172291    6729 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 01:13:15.193228    6729 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 01:13:15.214162    6729 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 01:13:15.235812    6729 config.go:182] Loaded profile config "force-systemd-env-176000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 01:13:15.235909    6729 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 01:13:15.264388    6729 out.go:177] * Using the hyperkit driver based on user configuration
	I0806 01:13:15.305325    6729 start.go:297] selected driver: hyperkit
	I0806 01:13:15.305339    6729 start.go:901] validating driver "hyperkit" against <nil>
	I0806 01:13:15.305350    6729 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 01:13:15.308448    6729 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:13:15.308566    6729 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19370-944/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0806 01:13:15.316976    6729 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0806 01:13:15.320943    6729 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 01:13:15.320974    6729 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0806 01:13:15.321011    6729 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 01:13:15.321205    6729 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0806 01:13:15.321261    6729 cni.go:84] Creating CNI manager for ""
	I0806 01:13:15.321277    6729 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 01:13:15.321290    6729 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0806 01:13:15.321351    6729 start.go:340] cluster config:
	{Name:force-systemd-flag-672000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-672000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:clus
ter.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 01:13:15.321443    6729 iso.go:125] acquiring lock: {Name:mka9ceffb203a07dd8928fb34e5b66df1a4204ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:13:15.363136    6729 out.go:177] * Starting "force-systemd-flag-672000" primary control-plane node in "force-systemd-flag-672000" cluster
	I0806 01:13:15.384423    6729 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 01:13:15.384455    6729 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0806 01:13:15.384469    6729 cache.go:56] Caching tarball of preloaded images
	I0806 01:13:15.384584    6729 preload.go:172] Found /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0806 01:13:15.384593    6729 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 01:13:15.384678    6729 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/force-systemd-flag-672000/config.json ...
	I0806 01:13:15.384695    6729 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/force-systemd-flag-672000/config.json: {Name:mkea0112c6a18960ac08eaa680b2b052d3811683 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 01:13:15.385058    6729 start.go:360] acquireMachinesLock for force-systemd-flag-672000: {Name:mk23fe223591838ba69a1052c4474834b6e8897d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:14:12.505280    6729 start.go:364] duration metric: took 57.119210346s to acquireMachinesLock for "force-systemd-flag-672000"
	I0806 01:14:12.505335    6729 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-672000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-672000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:14:12.505379    6729 start.go:125] createHost starting for "" (driver="hyperkit")
	I0806 01:14:12.526924    6729 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0806 01:14:12.527041    6729 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 01:14:12.527077    6729 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 01:14:12.535495    6729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53727
	I0806 01:14:12.535834    6729 main.go:141] libmachine: () Calling .GetVersion
	I0806 01:14:12.536259    6729 main.go:141] libmachine: Using API Version  1
	I0806 01:14:12.536269    6729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 01:14:12.536481    6729 main.go:141] libmachine: () Calling .GetMachineName
	I0806 01:14:12.536666    6729 main.go:141] libmachine: (force-systemd-flag-672000) Calling .GetMachineName
	I0806 01:14:12.536778    6729 main.go:141] libmachine: (force-systemd-flag-672000) Calling .DriverName
	I0806 01:14:12.536894    6729 start.go:159] libmachine.API.Create for "force-systemd-flag-672000" (driver="hyperkit")
	I0806 01:14:12.536919    6729 client.go:168] LocalClient.Create starting
	I0806 01:14:12.536952    6729 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem
	I0806 01:14:12.537005    6729 main.go:141] libmachine: Decoding PEM data...
	I0806 01:14:12.537022    6729 main.go:141] libmachine: Parsing certificate...
	I0806 01:14:12.537087    6729 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem
	I0806 01:14:12.537126    6729 main.go:141] libmachine: Decoding PEM data...
	I0806 01:14:12.537137    6729 main.go:141] libmachine: Parsing certificate...
	I0806 01:14:12.537149    6729 main.go:141] libmachine: Running pre-create checks...
	I0806 01:14:12.537156    6729 main.go:141] libmachine: (force-systemd-flag-672000) Calling .PreCreateCheck
	I0806 01:14:12.537261    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:14:12.537434    6729 main.go:141] libmachine: (force-systemd-flag-672000) Calling .GetConfigRaw
	I0806 01:14:12.568969    6729 main.go:141] libmachine: Creating machine...
	I0806 01:14:12.568995    6729 main.go:141] libmachine: (force-systemd-flag-672000) Calling .Create
	I0806 01:14:12.569094    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:14:12.569235    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | I0806 01:14:12.569097    6745 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 01:14:12.569312    6729 main.go:141] libmachine: (force-systemd-flag-672000) Downloading /Users/jenkins/minikube-integration/19370-944/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-944/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 01:14:12.987247    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | I0806 01:14:12.987180    6745 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/id_rsa...
	I0806 01:14:13.095130    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | I0806 01:14:13.095076    6745 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/force-systemd-flag-672000.rawdisk...
	I0806 01:14:13.095152    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Writing magic tar header
	I0806 01:14:13.095176    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Writing SSH key tar header
	I0806 01:14:13.095463    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | I0806 01:14:13.095420    6745 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000 ...
	I0806 01:14:13.516743    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:14:13.516773    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/hyperkit.pid
	I0806 01:14:13.516787    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Using UUID d05748e9-6062-43c7-9283-18acdbbdd71a
	I0806 01:14:13.542766    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Generated MAC 12:85:d1:e2:c4:b6
	I0806 01:14:13.542782    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-672000
	I0806 01:14:13.542814    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:14:13 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"d05748e9-6062-43c7-9283-18acdbbdd71a", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000aa330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]st
ring(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 01:14:13.542840    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:14:13 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"d05748e9-6062-43c7-9283-18acdbbdd71a", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000aa330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]st
ring(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 01:14:13.542897    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:14:13 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "d05748e9-6062-43c7-9283-18acdbbdd71a", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/force-systemd-flag-672000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-sy
stemd-flag-672000/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-672000"}
	I0806 01:14:13.542935    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:14:13 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U d05748e9-6062-43c7-9283-18acdbbdd71a -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/force-systemd-flag-672000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/console-ring -f kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-672000"
	I0806 01:14:13.542977    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:14:13 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0806 01:14:13.545845    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:14:13 DEBUG: hyperkit: Pid is 6759
	I0806 01:14:13.546776    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 0
	I0806 01:14:13.546789    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:14:13.546848    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6759
	I0806 01:14:13.547729    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 12:85:d1:e2:c4:b6 in /var/db/dhcpd_leases ...
	I0806 01:14:13.547794    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:14:13.547814    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:14:13.547846    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:14:13.547865    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:14:13.547897    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:14:13.547907    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:14:13.547915    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:14:13.547923    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:14:13.547935    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:14:13.547950    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:14:13.547958    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:14:13.547967    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:14:13.547978    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:14:13.547989    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:14:13.548014    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:14:13.548040    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:14:13.548105    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:14:13.548128    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:14:13.553827    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:14:13 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0806 01:14:13.561947    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:14:13 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0806 01:14:13.562900    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:14:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 01:14:13.562913    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:14:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 01:14:13.562920    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:14:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 01:14:13.562927    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:14:13 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 01:14:13.938004    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:14:13 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0806 01:14:13.938027    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:14:13 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0806 01:14:14.052658    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:14:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 01:14:14.052675    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:14:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 01:14:14.052688    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:14:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 01:14:14.052710    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:14:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 01:14:14.053569    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:14:14 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0806 01:14:14.053580    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:14:14 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0806 01:14:15.548540    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 1
	I0806 01:14:15.548553    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:14:15.548607    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6759
	I0806 01:14:15.549396    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 12:85:d1:e2:c4:b6 in /var/db/dhcpd_leases ...
	I0806 01:14:15.549455    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:14:15.549470    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:14:15.549485    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:14:15.549496    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:14:15.549518    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:14:15.549529    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:14:15.549538    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:14:15.549547    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:14:15.549564    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:14:15.549576    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:14:15.549583    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:14:15.549592    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:14:15.549602    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:14:15.549610    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:14:15.549618    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:14:15.549626    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:14:15.549633    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:14:15.549640    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:14:17.550781    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 2
	I0806 01:14:17.550795    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:14:17.550862    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6759
	I0806 01:14:17.551645    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 12:85:d1:e2:c4:b6 in /var/db/dhcpd_leases ...
	I0806 01:14:17.551710    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:14:17.551724    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:14:17.551737    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:14:17.551746    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:14:17.551757    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:14:17.551767    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:14:17.551775    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:14:17.551786    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:14:17.551793    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:14:17.551801    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:14:17.551816    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:14:17.551828    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:14:17.551845    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:14:17.551859    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:14:17.551868    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:14:17.551877    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:14:17.551886    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:14:17.551893    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:14:19.439019    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:14:19 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0806 01:14:19.439168    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:14:19 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0806 01:14:19.439178    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:14:19 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0806 01:14:19.458840    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:14:19 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0806 01:14:19.554059    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 3
	I0806 01:14:19.554087    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:14:19.554296    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6759
	I0806 01:14:19.555697    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 12:85:d1:e2:c4:b6 in /var/db/dhcpd_leases ...
	I0806 01:14:19.555823    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:14:19.555843    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:14:19.555869    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:14:19.555893    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:14:19.555907    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:14:19.555937    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:14:19.555962    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:14:19.555994    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:14:19.556029    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:14:19.556055    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:14:19.556091    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:14:19.556101    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:14:19.556124    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:14:19.556161    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:14:19.556183    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:14:19.556202    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:14:19.556217    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:14:19.556228    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:14:21.556982    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 4
	I0806 01:14:21.556999    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:14:21.557056    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6759
	I0806 01:14:21.557836    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 12:85:d1:e2:c4:b6 in /var/db/dhcpd_leases ...
	I0806 01:14:21.557901    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:14:21.557914    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:14:21.557931    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:14:21.557937    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:14:21.557946    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:14:21.557954    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:14:21.557967    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:14:21.557979    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:14:21.557988    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:14:21.558005    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:14:21.558030    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:14:21.558042    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:14:21.558050    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:14:21.558057    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:14:21.558071    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:14:21.558081    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:14:21.558091    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:14:21.558099    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:14:23.560243    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 5
	I0806 01:14:23.560259    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:14:23.560310    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6759
	I0806 01:14:23.561243    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 12:85:d1:e2:c4:b6 in /var/db/dhcpd_leases ...
	I0806 01:14:23.561287    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:14:23.561296    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:14:23.561305    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:14:23.561312    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:14:23.561319    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:14:23.561330    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:14:23.561338    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:14:23.561346    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:14:23.561354    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:14:23.561362    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:14:23.561370    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:14:23.561379    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:14:23.561400    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:14:23.561414    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:14:23.561423    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:14:23.561431    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:14:23.561450    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:14:23.561461    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:14:25.562867    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 6
	I0806 01:14:25.562880    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:14:25.562989    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6759
	I0806 01:14:25.563783    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 12:85:d1:e2:c4:b6 in /var/db/dhcpd_leases ...
	I0806 01:14:25.563824    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:14:25.563837    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:14:25.563847    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:14:25.563854    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:14:25.563861    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:14:25.563867    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:14:25.563874    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:14:25.563880    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:14:25.563886    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:14:25.563903    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:14:25.563914    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:14:25.563921    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:14:25.563930    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:14:25.563938    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:14:25.563945    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:14:25.563963    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:14:25.563978    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:14:25.563995    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:14:27.565100    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 7
	I0806 01:14:27.565115    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:14:27.565202    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6759
	I0806 01:14:27.566232    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 12:85:d1:e2:c4:b6 in /var/db/dhcpd_leases ...
	I0806 01:14:27.566310    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:14:27.566336    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:14:27.566350    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:14:27.566358    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:14:27.566366    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:14:27.566373    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:14:27.566380    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:14:27.566388    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:14:27.566399    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:14:27.566407    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:14:27.566414    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:14:27.566421    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:14:27.566438    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:14:27.566453    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:14:27.566461    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:14:27.566469    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:14:27.566477    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:14:27.566493    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:14:29.568465    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 8
	I0806 01:14:29.568482    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:14:29.568539    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6759
	I0806 01:14:29.569317    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 12:85:d1:e2:c4:b6 in /var/db/dhcpd_leases ...
	I0806 01:14:29.569335    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:14:29.569366    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:14:29.569386    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:14:29.569397    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:14:29.569405    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:14:29.569413    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:14:29.569420    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:14:29.569438    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:14:29.569451    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:14:29.569462    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:14:29.569470    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:14:29.569478    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:14:29.569485    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:14:29.569493    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:14:29.569500    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:14:29.569508    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:14:29.569517    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:14:29.569524    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:14:31.569627    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 9
	I0806 01:14:31.569641    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:14:31.569774    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6759
	I0806 01:14:31.570550    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 12:85:d1:e2:c4:b6 in /var/db/dhcpd_leases ...
	I0806 01:14:31.570578    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:14:31.570597    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:14:31.570611    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:14:31.570621    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:14:31.570638    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:14:31.570646    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:14:31.570653    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:14:31.570659    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:14:31.570673    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:14:31.570681    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:14:31.570689    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:14:31.570695    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:14:31.570703    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:14:31.570712    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:14:31.570719    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:14:31.570726    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:14:31.570733    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:14:31.570741    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:14:33.571043    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 10
	I0806 01:14:33.571058    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:14:33.571135    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6759
	I0806 01:14:33.571942    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 12:85:d1:e2:c4:b6 in /var/db/dhcpd_leases ...
	I0806 01:14:33.571982    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:14:33.571992    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:14:33.572002    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:14:33.572011    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:14:33.572024    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:14:33.572030    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:14:33.572038    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:14:33.572046    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:14:33.572063    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:14:33.572077    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:14:33.572085    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:14:33.572098    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:14:33.572107    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:14:33.572123    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:14:33.572135    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:14:33.572143    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:14:33.572150    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:14:33.572157    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:14:35.572535    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 11
	I0806 01:14:35.572565    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:14:35.572622    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6759
	I0806 01:14:35.573538    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 12:85:d1:e2:c4:b6 in /var/db/dhcpd_leases ...
	I0806 01:14:35.573579    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:14:35.573592    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:14:35.573604    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:14:35.573610    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:14:35.573624    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:14:35.573636    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:14:35.573643    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:14:35.573653    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:14:35.573661    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:14:35.573666    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:14:35.573673    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:14:35.573687    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:14:35.573695    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:14:35.573701    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:14:35.573708    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:14:35.573714    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:14:35.573722    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:14:35.573731    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:14:37.575795    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 12
	I0806 01:14:37.575811    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:14:37.575860    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6759
	I0806 01:14:37.576650    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 12:85:d1:e2:c4:b6 in /var/db/dhcpd_leases ...
	I0806 01:14:37.576691    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:14:37.576701    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:14:37.576710    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:14:37.576717    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:14:37.576724    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:14:37.576737    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:14:37.576744    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:14:37.576750    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:14:37.576765    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:14:37.576775    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:14:37.576783    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:14:37.576794    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:14:37.576814    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:14:37.576824    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:14:37.576832    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:14:37.576841    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:14:37.576848    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:14:37.576856    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:14:39.577751    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 13
	I0806 01:14:39.577768    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:14:39.577830    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6759
	I0806 01:14:39.578588    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 12:85:d1:e2:c4:b6 in /var/db/dhcpd_leases ...
	I0806 01:14:39.578640    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:14:39.578651    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:14:39.578660    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:14:39.578667    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:14:39.578675    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:14:39.578680    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:14:39.578687    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:14:39.578693    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:14:39.578700    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:14:39.578714    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:14:39.578720    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:14:39.578736    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:14:39.578745    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:14:39.578753    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:14:39.578766    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:14:39.578773    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:14:39.578781    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:14:39.578789    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:14:41.579251    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 14
	I0806 01:14:41.579272    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:14:41.579385    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6759
	I0806 01:14:41.580181    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 12:85:d1:e2:c4:b6 in /var/db/dhcpd_leases ...
	I0806 01:14:41.580228    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:14:41.580245    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:14:41.580261    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:14:41.580280    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:14:41.580290    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:14:41.580300    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:14:41.580307    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:14:41.580329    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:14:41.580338    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:14:41.580346    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:14:41.580354    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:14:41.580361    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:14:41.580369    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:14:41.580376    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:14:41.580384    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:14:41.580397    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:14:41.580426    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:14:41.580457    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:14:43.582471    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 15
	I0806 01:14:43.582484    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:14:43.582518    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6759
	I0806 01:14:43.583497    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 12:85:d1:e2:c4:b6 in /var/db/dhcpd_leases ...
	I0806 01:14:43.583524    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:14:43.583537    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:14:43.583554    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:14:43.583564    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:14:43.583572    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:14:43.583579    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:14:43.583585    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:14:43.583592    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:14:43.583598    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:14:43.583614    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:14:43.583627    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:14:43.583647    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:14:43.583654    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:14:43.583661    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:14:43.583673    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:14:43.583687    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:14:43.583698    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:14:43.583716    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:14:45.584382    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 16
	I0806 01:14:45.584397    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:14:45.584480    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6759
	I0806 01:14:45.585219    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 12:85:d1:e2:c4:b6 in /var/db/dhcpd_leases ...
	I0806 01:14:45.585274    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:14:45.585287    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:14:45.585295    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:14:45.585303    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:14:45.585310    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:14:45.585319    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:14:45.585334    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:14:45.585345    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:14:45.585353    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:14:45.585362    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:14:45.585377    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:14:45.585390    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:14:45.585399    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:14:45.585407    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:14:45.585415    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:14:45.585423    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:14:45.585431    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:14:45.585439    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:14:47.587574    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 17
	I0806 01:14:47.587588    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:14:47.587701    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6759
	I0806 01:14:47.588494    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 12:85:d1:e2:c4:b6 in /var/db/dhcpd_leases ...
	I0806 01:14:47.588554    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:14:47.588568    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:14:47.588582    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:14:47.588591    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:14:47.588599    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:14:47.588616    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:14:47.588624    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:14:47.588632    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:14:47.588639    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:14:47.588645    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:14:47.588654    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:14:47.588662    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:14:47.588668    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:14:47.588674    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:14:47.588682    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:14:47.588689    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:14:47.588697    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:14:47.588706    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:14:49.589371    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 18
	I0806 01:14:49.589384    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:14:49.589522    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6759
	I0806 01:14:49.590378    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 12:85:d1:e2:c4:b6 in /var/db/dhcpd_leases ...
	I0806 01:14:49.590427    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:14:49.590440    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:14:49.590453    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:14:49.590463    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:14:49.590470    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:14:49.590478    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:14:49.590485    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:14:49.590493    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:14:49.590509    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:14:49.590522    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:14:49.590530    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:14:49.590536    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:14:49.590546    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:14:49.590552    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:14:49.590560    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:14:49.590571    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:14:49.590580    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:14:49.590597    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:14:51.592302    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 19
	I0806 01:14:51.592317    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:14:51.592407    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6759
	I0806 01:14:51.593344    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 12:85:d1:e2:c4:b6 in /var/db/dhcpd_leases ...
	I0806 01:14:51.593389    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:14:51.593400    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:14:51.593422    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:14:51.593428    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:14:51.593438    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:14:51.593446    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:14:51.593456    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:14:51.593464    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:14:51.593471    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:14:51.593478    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:14:51.593495    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:14:51.593509    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:14:51.593524    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:14:51.593533    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:14:51.593540    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:14:51.593548    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:14:51.593556    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:14:51.593564    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:14:53.595598    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 20
	I0806 01:14:53.595612    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:14:53.595719    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6759
	I0806 01:14:53.596478    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 12:85:d1:e2:c4:b6 in /var/db/dhcpd_leases ...
	I0806 01:14:53.596536    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:14:53.596547    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:14:53.596557    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:14:53.596594    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:14:53.596613    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:14:53.596626    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:14:53.596634    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:14:53.596650    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:14:53.596665    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:14:53.596678    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:14:53.596687    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:14:53.596704    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:14:53.596711    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:14:53.596718    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:14:53.596738    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:14:53.596748    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:14:53.596764    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:14:53.596775    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:14:55.596984    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 21
	I0806 01:14:55.597002    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:14:55.597055    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6759
	I0806 01:14:55.597822    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 12:85:d1:e2:c4:b6 in /var/db/dhcpd_leases ...
	I0806 01:14:55.597863    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:14:55.597876    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:14:55.597897    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:14:55.597906    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:14:55.597914    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:14:55.597921    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:14:55.597930    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:14:55.597937    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:14:55.597945    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:14:55.597953    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:14:55.597960    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:14:55.597967    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:14:55.597985    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:14:55.597996    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:14:55.598003    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:14:55.598010    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:14:55.598033    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:14:55.598046    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:14:57.598575    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 22
	I0806 01:14:57.598590    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:14:57.598663    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6759
	I0806 01:14:57.599646    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 12:85:d1:e2:c4:b6 in /var/db/dhcpd_leases ...
	I0806 01:14:57.599700    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:14:57.599711    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:14:57.599725    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:14:57.599738    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:14:57.599749    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:14:57.599754    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:14:57.599762    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:14:57.599772    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:14:57.599788    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:14:57.599800    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:14:57.599808    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:14:57.599816    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:14:57.599832    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:14:57.599843    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:14:57.599852    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:14:57.599861    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:14:57.599871    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:14:57.599879    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:14:59.601826    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 23
	I0806 01:14:59.601839    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:14:59.601882    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6759
	I0806 01:14:59.602750    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 12:85:d1:e2:c4:b6 in /var/db/dhcpd_leases ...
	I0806 01:14:59.602794    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	(17 dhcp entries unchanged from Attempt 22; no lease for 12:85:d1:e2:c4:b6)
	I0806 01:15:01.605054    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 24
	I0806 01:15:01.605069    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:01.605117    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6759
	I0806 01:15:01.606012    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 12:85:d1:e2:c4:b6 in /var/db/dhcpd_leases ...
	I0806 01:15:01.606046    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	(17 dhcp entries unchanged from Attempt 22; no lease for 12:85:d1:e2:c4:b6)
	I0806 01:15:03.607224    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 25
	I0806 01:15:03.607237    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:03.607300    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6759
	I0806 01:15:03.608217    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 12:85:d1:e2:c4:b6 in /var/db/dhcpd_leases ...
	I0806 01:15:03.608269    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	(17 dhcp entries unchanged from Attempt 22; no lease for 12:85:d1:e2:c4:b6)
	I0806 01:15:05.609709    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 26
	I0806 01:15:05.609721    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:05.609769    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6759
	I0806 01:15:05.610572    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 12:85:d1:e2:c4:b6 in /var/db/dhcpd_leases ...
	I0806 01:15:05.610595    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	(17 dhcp entries unchanged from Attempt 22; no lease for 12:85:d1:e2:c4:b6)
	I0806 01:15:07.611965    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 27
	I0806 01:15:07.611983    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:07.612045    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6759
	I0806 01:15:07.613008    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 12:85:d1:e2:c4:b6 in /var/db/dhcpd_leases ...
	I0806 01:15:07.613051    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	(17 dhcp entries unchanged from Attempt 22; no lease for 12:85:d1:e2:c4:b6)
	I0806 01:15:09.613280    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 28
	I0806 01:15:09.613309    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:09.613402    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6759
	I0806 01:15:09.614161    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 12:85:d1:e2:c4:b6 in /var/db/dhcpd_leases ...
	I0806 01:15:09.614226    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	(17 dhcp entries unchanged from Attempt 22; no lease for 12:85:d1:e2:c4:b6)
	I0806 01:15:11.615325    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 29
	I0806 01:15:11.615338    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:11.615463    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6759
	I0806 01:15:11.616238    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 12:85:d1:e2:c4:b6 in /var/db/dhcpd_leases ...
	I0806 01:15:11.616290    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	(17 dhcp entries unchanged from Attempt 22; no lease for 12:85:d1:e2:c4:b6)
	I0806 01:15:13.618543    6729 client.go:171] duration metric: took 1m1.080547113s to LocalClient.Create
	I0806 01:15:15.619158    6729 start.go:128] duration metric: took 1m3.112661327s to createHost
	I0806 01:15:15.619175    6729 start.go:83] releasing machines lock for "force-systemd-flag-672000", held for 1m3.112782731s
	W0806 01:15:15.619202    6729 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 12:85:d1:e2:c4:b6
	I0806 01:15:15.619510    6729 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 01:15:15.619546    6729 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 01:15:15.628029    6729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53743
	I0806 01:15:15.628401    6729 main.go:141] libmachine: () Calling .GetVersion
	I0806 01:15:15.628720    6729 main.go:141] libmachine: Using API Version  1
	I0806 01:15:15.628735    6729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 01:15:15.628990    6729 main.go:141] libmachine: () Calling .GetMachineName
	I0806 01:15:15.629377    6729 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 01:15:15.629395    6729 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 01:15:15.637700    6729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53745
	I0806 01:15:15.638026    6729 main.go:141] libmachine: () Calling .GetVersion
	I0806 01:15:15.638413    6729 main.go:141] libmachine: Using API Version  1
	I0806 01:15:15.638433    6729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 01:15:15.638674    6729 main.go:141] libmachine: () Calling .GetMachineName
	I0806 01:15:15.638806    6729 main.go:141] libmachine: (force-systemd-flag-672000) Calling .GetState
	I0806 01:15:15.638899    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:15.638968    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6759
	I0806 01:15:15.639940    6729 main.go:141] libmachine: (force-systemd-flag-672000) Calling .DriverName
	I0806 01:15:15.661268    6729 out.go:177] * Deleting "force-systemd-flag-672000" in hyperkit ...
	I0806 01:15:15.702465    6729 main.go:141] libmachine: (force-systemd-flag-672000) Calling .Remove
	I0806 01:15:15.702584    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:15.702600    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:15.702663    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6759
	I0806 01:15:15.703595    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:15.703651    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | waiting for graceful shutdown
	I0806 01:15:16.705795    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:16.705879    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6759
	I0806 01:15:16.706768    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | waiting for graceful shutdown
	I0806 01:15:17.707619    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:17.707727    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6759
	I0806 01:15:17.709402    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | waiting for graceful shutdown
	I0806 01:15:18.711208    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:18.711287    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6759
	I0806 01:15:18.712002    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | waiting for graceful shutdown
	I0806 01:15:19.712602    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:19.712725    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6759
	I0806 01:15:19.713407    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | waiting for graceful shutdown
	I0806 01:15:20.715271    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:15:20.715346    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6759
	I0806 01:15:20.716329    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | sending sigkill
	I0806 01:15:20.716338    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	W0806 01:15:20.727758    6729 out.go:239] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 12:85:d1:e2:c4:b6
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 12:85:d1:e2:c4:b6
	I0806 01:15:20.727776    6729 start.go:729] Will try again in 5 seconds ...
	I0806 01:15:20.739219    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:15:20 WARN : hyperkit: failed to read stdout: EOF
	I0806 01:15:20.739237    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:15:20 WARN : hyperkit: failed to read stderr: EOF
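The "IP address never found in dhcp leases file" failure above comes from the driver repeatedly scanning macOS's /var/db/dhcpd_leases for the VM's generated MAC address (12:85:d1:e2:c4:b6), which never appears among the 17 existing entries. Note in the lease entries logged above that macOS stores each MAC octet with leading zeros stripped (e.g. `96:5:a5:0:af:d`), so any matcher must normalize the MAC before comparing. The following is a minimal illustrative sketch of that matching logic, not the driver's actual code; `trimMAC` and `findIP` are hypothetical names:

```go
package main

import (
	"fmt"
	"strings"
)

// trimMAC normalizes a MAC address the way /var/db/dhcpd_leases stores it:
// lowercase, with leading zeros stripped from each octet ("0a" -> "a").
func trimMAC(mac string) string {
	parts := strings.Split(strings.ToLower(mac), ":")
	for i, p := range parts {
		p = strings.TrimLeft(p, "0")
		if p == "" {
			p = "0" // an all-zero octet keeps a single "0"
		}
		parts[i] = p
	}
	return strings.Join(parts, ":")
}

// findIP scans lease entries of the form
//   {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ...}
// and returns the IP bound to the given MAC, if present.
func findIP(leases []string, mac string) (string, bool) {
	want := trimMAC(mac)
	for _, l := range leases {
		var ip, hw string
		for _, f := range strings.Fields(strings.Trim(l, "{}")) {
			if v, ok := strings.CutPrefix(f, "IPAddress:"); ok {
				ip = v
			}
			if v, ok := strings.CutPrefix(f, "HWAddress:"); ok {
				hw = v
			}
		}
		if hw == want {
			return ip, true
		}
	}
	return "", false
}

func main() {
	leases := []string{
		"{Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}",
		"{Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}",
	}
	if ip, ok := findIP(leases, "96:05:a5:00:af:0d"); ok {
		fmt.Println(ip) // matches the zero-stripped entry
	} else {
		fmt.Println("not found")
	}
}
```

In the failing run the VM presumably never completed DHCP (the hyperkit process was later killed), so no normalization would have helped; the retry loop simply exhausts its attempts and surfaces the temporary error.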
	I0806 01:15:25.729909    6729 start.go:360] acquireMachinesLock for force-systemd-flag-672000: {Name:mk23fe223591838ba69a1052c4474834b6e8897d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:16:18.462168    6729 start.go:364] duration metric: took 52.731290295s to acquireMachinesLock for "force-systemd-flag-672000"
	I0806 01:16:18.462188    6729 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-672000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-672000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:16:18.462292    6729 start.go:125] createHost starting for "" (driver="hyperkit")
	I0806 01:16:18.483575    6729 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0806 01:16:18.483663    6729 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 01:16:18.483690    6729 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 01:16:18.492080    6729 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53753
	I0806 01:16:18.492397    6729 main.go:141] libmachine: () Calling .GetVersion
	I0806 01:16:18.492744    6729 main.go:141] libmachine: Using API Version  1
	I0806 01:16:18.492761    6729 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 01:16:18.492986    6729 main.go:141] libmachine: () Calling .GetMachineName
	I0806 01:16:18.493104    6729 main.go:141] libmachine: (force-systemd-flag-672000) Calling .GetMachineName
	I0806 01:16:18.493195    6729 main.go:141] libmachine: (force-systemd-flag-672000) Calling .DriverName
	I0806 01:16:18.493345    6729 start.go:159] libmachine.API.Create for "force-systemd-flag-672000" (driver="hyperkit")
	I0806 01:16:18.493375    6729 client.go:168] LocalClient.Create starting
	I0806 01:16:18.493401    6729 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem
	I0806 01:16:18.493451    6729 main.go:141] libmachine: Decoding PEM data...
	I0806 01:16:18.493461    6729 main.go:141] libmachine: Parsing certificate...
	I0806 01:16:18.493510    6729 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem
	I0806 01:16:18.493550    6729 main.go:141] libmachine: Decoding PEM data...
	I0806 01:16:18.493560    6729 main.go:141] libmachine: Parsing certificate...
	I0806 01:16:18.493572    6729 main.go:141] libmachine: Running pre-create checks...
	I0806 01:16:18.493578    6729 main.go:141] libmachine: (force-systemd-flag-672000) Calling .PreCreateCheck
	I0806 01:16:18.493679    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:18.493704    6729 main.go:141] libmachine: (force-systemd-flag-672000) Calling .GetConfigRaw
	I0806 01:16:18.525762    6729 main.go:141] libmachine: Creating machine...
	I0806 01:16:18.525771    6729 main.go:141] libmachine: (force-systemd-flag-672000) Calling .Create
	I0806 01:16:18.525867    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:18.526031    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | I0806 01:16:18.525865    6810 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 01:16:18.526092    6729 main.go:141] libmachine: (force-systemd-flag-672000) Downloading /Users/jenkins/minikube-integration/19370-944/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-944/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 01:16:18.728175    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | I0806 01:16:18.728096    6810 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/id_rsa...
	I0806 01:16:18.777177    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | I0806 01:16:18.777106    6810 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/force-systemd-flag-672000.rawdisk...
	I0806 01:16:18.777187    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Writing magic tar header
	I0806 01:16:18.777197    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Writing SSH key tar header
	I0806 01:16:18.777568    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | I0806 01:16:18.777525    6810 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000 ...
	I0806 01:16:19.156629    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:19.156648    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/hyperkit.pid
	I0806 01:16:19.156715    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Using UUID 3113d4f8-a6ea-48dd-9d09-dc5dc5aeba0c
	I0806 01:16:19.182006    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Generated MAC 36:1b:93:7d:33:4f
	I0806 01:16:19.182024    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-672000
	I0806 01:16:19.182053    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:16:19 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3113d4f8-a6ea-48dd-9d09-dc5dc5aeba0c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000198630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 01:16:19.182082    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:16:19 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3113d4f8-a6ea-48dd-9d09-dc5dc5aeba0c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000198630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 01:16:19.182129    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:16:19 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "3113d4f8-a6ea-48dd-9d09-dc5dc5aeba0c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/force-systemd-flag-672000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-672000"}
	I0806 01:16:19.182165    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:16:19 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 3113d4f8-a6ea-48dd-9d09-dc5dc5aeba0c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/force-systemd-flag-672000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/console-ring -f kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-672000"
	I0806 01:16:19.182180    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:16:19 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0806 01:16:19.185159    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:16:19 DEBUG: hyperkit: Pid is 6811
	I0806 01:16:19.185648    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 0
	I0806 01:16:19.185664    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:19.185746    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6811
	I0806 01:16:19.186770    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 36:1b:93:7d:33:4f in /var/db/dhcpd_leases ...
	I0806 01:16:19.186795    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:16:19.186813    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:16:19.186845    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:16:19.186867    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:16:19.186884    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:16:19.186901    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:16:19.186914    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:16:19.186942    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:16:19.186957    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:16:19.186972    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:16:19.186983    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:16:19.186990    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:16:19.186997    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:16:19.187024    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:16:19.187045    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:16:19.187059    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:16:19.187072    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:16:19.187092    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:16:19.192678    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:16:19 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0806 01:16:19.200824    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:16:19 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-flag-672000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0806 01:16:19.201677    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:16:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 01:16:19.201705    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:16:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 01:16:19.201720    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:16:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 01:16:19.201730    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:16:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 01:16:19.578926    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:16:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0806 01:16:19.578938    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:16:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0806 01:16:19.693538    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:16:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 01:16:19.693556    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:16:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 01:16:19.693571    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:16:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 01:16:19.693613    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:16:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 01:16:19.694482    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:16:19 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0806 01:16:19.694496    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:16:19 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0806 01:16:21.187290    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 1
	I0806 01:16:21.187312    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:21.187371    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6811
	I0806 01:16:21.188159    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 36:1b:93:7d:33:4f in /var/db/dhcpd_leases ...
	I0806 01:16:21.188206    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:16:21.188217    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:16:21.188240    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:16:21.188255    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:16:21.188262    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:16:21.188269    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:16:21.188275    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:16:21.188285    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:16:21.188310    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:16:21.188324    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:16:21.188331    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:16:21.188338    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:16:21.188346    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:16:21.188354    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:16:21.188365    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:16:21.188373    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:16:21.188380    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:16:21.188388    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:16:23.189040    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 2
	I0806 01:16:23.189055    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:23.189195    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6811
	I0806 01:16:23.190001    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 36:1b:93:7d:33:4f in /var/db/dhcpd_leases ...
	I0806 01:16:23.190031    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:16:23.190075    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:16:23.190087    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:16:23.190095    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:16:23.190101    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:16:23.190130    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:16:23.190143    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:16:23.190151    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:16:23.190160    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:16:23.190166    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:16:23.190175    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:16:23.190182    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:16:23.190188    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:16:23.190204    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:16:23.190213    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:16:23.190230    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:16:23.190243    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:16:23.190253    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:16:25.124356    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:16:25 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0806 01:16:25.124546    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:16:25 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0806 01:16:25.124583    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:16:25 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0806 01:16:25.145197    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | 2024/08/06 01:16:25 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0806 01:16:25.191309    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 3
	I0806 01:16:25.191354    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:25.191556    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6811
	I0806 01:16:25.193001    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 36:1b:93:7d:33:4f in /var/db/dhcpd_leases ...
	I0806 01:16:25.193132    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:16:25.193155    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:16:25.193177    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:16:25.193192    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:16:25.193211    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:16:25.193224    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:16:25.193234    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:16:25.193245    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:16:25.193255    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:16:25.193266    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:16:25.193276    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:16:25.193287    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:16:25.193296    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:16:25.193305    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:16:25.193315    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:16:25.193331    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:16:25.193343    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:16:25.193354    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:16:27.194778    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 4
	I0806 01:16:27.194797    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:27.194919    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6811
	I0806 01:16:27.195727    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 36:1b:93:7d:33:4f in /var/db/dhcpd_leases ...
	I0806 01:16:27.195779    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:16:27.195794    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:16:27.195809    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:16:27.195821    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:16:27.195830    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:16:27.195840    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:16:27.195848    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:16:27.195855    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:16:27.195861    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:16:27.195868    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:16:27.195878    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:16:27.195885    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:16:27.195893    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:16:27.195900    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:16:27.195907    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:16:27.195915    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:16:27.195933    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:16:27.195946    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:16:29.197277    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 5
	I0806 01:16:29.197302    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:29.197384    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6811
	I0806 01:16:29.198185    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 36:1b:93:7d:33:4f in /var/db/dhcpd_leases ...
	I0806 01:16:29.198239    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:16:29.198251    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:16:29.198273    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:16:29.198285    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:16:29.198293    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:16:29.198299    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:16:29.198305    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:16:29.198316    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:16:29.198325    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:16:29.198338    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:16:29.198350    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:16:29.198371    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:16:29.198385    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:16:29.198395    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:16:29.198403    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:16:29.198415    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:16:29.198424    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:16:29.198435    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:16:31.200493    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 6
	I0806 01:16:31.200508    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:31.200555    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6811
	I0806 01:16:31.201416    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 36:1b:93:7d:33:4f in /var/db/dhcpd_leases ...
	I0806 01:16:31.201459    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:16:31.201467    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:16:31.201486    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:16:31.201498    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:16:31.201513    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:16:31.201520    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:16:31.201537    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:16:31.201547    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:16:31.201557    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:16:31.201566    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:16:31.201577    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:16:31.201585    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:16:31.201593    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:16:31.201601    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:16:31.201608    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:16:31.201616    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:16:31.201625    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:16:31.201633    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:16:33.201661    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 7
	I0806 01:16:33.201675    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:33.201782    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6811
	I0806 01:16:33.202671    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 36:1b:93:7d:33:4f in /var/db/dhcpd_leases ...
	I0806 01:16:33.202724    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:16:33.202737    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:16:33.202747    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:16:33.202766    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:16:33.202777    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:16:33.202786    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:16:33.202793    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:16:33.202801    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:16:33.202808    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:16:33.202815    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:16:33.202822    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:16:33.202827    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:16:33.202843    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:16:33.202856    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:16:33.202866    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:16:33.202872    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:16:33.202885    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:16:33.202898    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:16:35.202974    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 8
	I0806 01:16:35.202989    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:35.202999    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6811
	I0806 01:16:35.203876    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 36:1b:93:7d:33:4f in /var/db/dhcpd_leases ...
	I0806 01:16:35.203925    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:16:35.203936    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:16:35.203952    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:16:35.203963    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:16:35.203979    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:16:35.203989    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:16:35.203999    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:16:35.204008    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:16:35.204016    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:16:35.204024    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:16:35.204031    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:16:35.204037    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:16:35.204044    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:16:35.204052    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:16:35.204059    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:16:35.204067    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:16:35.204074    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:16:35.204083    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:16:37.204349    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 9
	I0806 01:16:37.204366    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:37.204502    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6811
	I0806 01:16:37.205314    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 36:1b:93:7d:33:4f in /var/db/dhcpd_leases ...
	I0806 01:16:37.205347    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:16:37.205357    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:16:37.205375    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:16:37.205390    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:16:37.205399    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:16:37.205407    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:16:37.205413    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:16:37.205421    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:16:37.205429    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:16:37.205436    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:16:37.205456    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:16:37.205471    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:16:37.205484    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:16:37.205503    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:16:37.205518    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:16:37.205525    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:16:37.205534    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:16:37.205551    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:16:39.205787    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 10
	I0806 01:16:39.205800    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:39.205907    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6811
	I0806 01:16:39.206697    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 36:1b:93:7d:33:4f in /var/db/dhcpd_leases ...
	I0806 01:16:39.206752    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:16:39.206766    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:16:39.206780    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:16:39.206790    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:16:39.206797    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:16:39.206803    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:16:39.206814    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:16:39.206823    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:16:39.206830    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:16:39.206836    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:16:39.206842    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:16:39.206849    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:16:39.206855    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:16:39.206862    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:16:39.206877    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:16:39.206893    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:16:39.206901    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:16:39.206909    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:16:41.208973    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 11
	I0806 01:16:41.208987    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:41.209063    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6811
	I0806 01:16:41.210115    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 36:1b:93:7d:33:4f in /var/db/dhcpd_leases ...
	I0806 01:16:41.210166    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:16:41.210178    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:16:41.210189    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:16:41.210196    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:16:41.210203    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:16:41.210209    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:16:41.210228    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:16:41.210239    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:16:41.210276    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:16:41.210291    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:16:41.210299    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:16:41.210307    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:16:41.210315    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:16:41.210323    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:16:41.210332    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:16:41.210338    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:16:41.210347    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:16:41.210355    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:16:43.211401    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 12
	I0806 01:16:43.211415    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:43.211487    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6811
	I0806 01:16:43.212324    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 36:1b:93:7d:33:4f in /var/db/dhcpd_leases ...
	I0806 01:16:43.212378    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:16:43.212388    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:16:43.212402    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:16:43.212415    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:16:43.212425    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:16:43.212432    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:16:43.212443    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:16:43.212452    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:16:43.212475    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:16:43.212488    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:16:43.212500    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:16:43.212507    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:16:43.212514    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:16:43.212528    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:16:43.212543    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:16:43.212552    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:16:43.212558    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:16:43.212564    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:16:45.212615    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 13
	I0806 01:16:45.212627    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:45.212712    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6811
	I0806 01:16:45.213494    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 36:1b:93:7d:33:4f in /var/db/dhcpd_leases ...
	I0806 01:16:45.213562    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:16:45.213581    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:16:45.213591    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:16:45.213597    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:16:45.213607    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:16:45.213618    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:16:45.213693    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:16:45.213725    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:16:45.213744    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:16:45.213757    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:16:45.213775    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:16:45.213791    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:16:45.213801    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:16:45.213811    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:16:45.213818    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:16:45.213827    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:16:45.213834    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:16:45.213847    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:16:47.213779    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 14
	I0806 01:16:47.213795    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:47.213883    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6811
	I0806 01:16:47.214721    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 36:1b:93:7d:33:4f in /var/db/dhcpd_leases ...
	I0806 01:16:47.214768    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:16:47.214780    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:16:47.214788    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:16:47.214798    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:16:47.214808    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:16:47.214814    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:16:47.214821    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:16:47.214829    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:16:47.214843    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:16:47.214851    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:16:47.214858    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:16:47.214866    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:16:47.214874    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:16:47.214881    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:16:47.214888    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:16:47.214896    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:16:47.214903    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:16:47.214928    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:16:49.215937    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 15
	I0806 01:16:49.215953    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:49.216056    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6811
	I0806 01:16:49.216848    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 36:1b:93:7d:33:4f in /var/db/dhcpd_leases ...
	I0806 01:16:49.216893    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:16:49.216901    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:16:49.216915    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:16:49.216936    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:16:49.216958    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:16:49.216967    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:16:49.216975    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:16:49.216992    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:16:49.217005    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:16:49.217028    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:16:49.217041    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:16:49.217050    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:16:49.217058    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:16:49.217067    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:16:49.217075    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:16:49.217082    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:16:49.217088    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:16:49.217097    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:16:51.218940    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 16
	I0806 01:16:51.218956    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:51.218973    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6811
	I0806 01:16:51.219915    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 36:1b:93:7d:33:4f in /var/db/dhcpd_leases ...
	I0806 01:16:51.219957    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:16:51.219966    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:16:51.219975    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:16:51.219982    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:16:51.219988    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:16:51.219994    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:16:51.220013    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:16:51.220024    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:16:51.220033    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:16:51.220041    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:16:51.220068    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:16:51.220082    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:16:51.220097    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:16:51.220111    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:16:51.220130    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:16:51.220140    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:16:51.220147    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:16:51.220154    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:16:53.222194    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 17
	I0806 01:16:53.222209    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:53.222280    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6811
	I0806 01:16:53.223116    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 36:1b:93:7d:33:4f in /var/db/dhcpd_leases ...
	I0806 01:16:53.223172    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:16:53.223182    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:16:53.223192    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:16:53.223199    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:16:53.223205    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:16:53.223214    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:16:53.223221    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:16:53.223229    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:16:53.223235    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:16:53.223243    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:16:53.223250    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:16:53.223264    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:16:53.223277    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:16:53.223287    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:16:53.223297    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:16:53.223304    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:16:53.223310    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:16:53.223325    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:16:55.225346    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 18
	I0806 01:16:55.225361    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:55.225447    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6811
	I0806 01:16:55.226209    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 36:1b:93:7d:33:4f in /var/db/dhcpd_leases ...
	I0806 01:16:55.226259    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:16:55.226268    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:16:55.226287    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:16:55.226304    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:16:55.226313    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:16:55.226319    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:16:55.226327    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:16:55.226336    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:16:55.226344    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:16:55.226352    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:16:55.226364    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:16:55.226372    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:16:55.226382    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:16:55.226390    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:16:55.226396    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:16:55.226404    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:16:55.226412    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:16:55.226417    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:16:57.228465    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 19
	I0806 01:16:57.228479    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:57.228561    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6811
	I0806 01:16:57.229376    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 36:1b:93:7d:33:4f in /var/db/dhcpd_leases ...
	I0806 01:16:57.229415    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:16:57.229423    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:16:57.229434    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:16:57.229440    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:16:57.229457    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:16:57.229468    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:16:57.229478    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:16:57.229486    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:16:57.229494    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:16:57.229501    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:16:57.229513    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:16:57.229520    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:16:57.229542    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:16:57.229558    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:16:57.229567    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:16:57.229574    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:16:57.229586    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:16:57.229594    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:16:59.231631    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 20
	I0806 01:16:59.231645    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:16:59.231713    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6811
	I0806 01:16:59.232519    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 36:1b:93:7d:33:4f in /var/db/dhcpd_leases ...
	I0806 01:16:59.232533    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:16:59.232545    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:16:59.232553    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:16:59.232572    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:16:59.232585    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:16:59.232607    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:16:59.232621    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:16:59.232630    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:16:59.232638    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:16:59.232656    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:16:59.232669    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:16:59.232680    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:16:59.232689    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:16:59.232697    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:16:59.232705    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:16:59.232713    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:16:59.232720    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:16:59.232728    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:17:01.233215    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 21
	I0806 01:17:01.233227    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:17:01.233296    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6811
	I0806 01:17:01.234139    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 36:1b:93:7d:33:4f in /var/db/dhcpd_leases ...
	I0806 01:17:01.234177    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:17:01.234186    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:17:01.234201    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:17:01.234208    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:17:01.234215    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:17:01.234222    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:17:01.234229    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:17:01.234238    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:17:01.234245    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:17:01.234253    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:17:01.234262    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:17:01.234270    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:17:01.234285    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:17:01.234297    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:17:01.234305    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:17:01.234314    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:17:01.234321    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:17:01.234333    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:17:03.235593    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 22
	I0806 01:17:03.235608    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:17:03.235689    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6811
	I0806 01:17:03.236541    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 36:1b:93:7d:33:4f in /var/db/dhcpd_leases ...
	I0806 01:17:03.236579    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:17:03.236594    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:17:03.236613    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:17:03.236635    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:17:03.236646    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:17:03.236653    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:17:03.236662    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:17:03.236669    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:17:03.236676    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:17:03.236692    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:17:03.236705    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:17:03.236713    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:17:03.236721    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:17:03.236728    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:17:03.236736    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:17:03.236744    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:17:03.236753    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:17:03.236768    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:17:05.238870    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 23
	I0806 01:17:05.238884    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:17:05.238994    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6811
	I0806 01:17:05.239836    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 36:1b:93:7d:33:4f in /var/db/dhcpd_leases ...
	I0806 01:17:05.239868    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:17:05.239875    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:17:05.239885    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:17:05.239891    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:17:05.239906    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:17:05.239914    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:17:05.239922    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:17:05.239930    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:17:05.239946    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:17:05.239957    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:17:05.239965    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:17:05.239979    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:17:05.239988    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:17:05.239996    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:17:05.240005    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:17:05.240017    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:17:05.240030    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:17:05.240041    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:17:07.242086    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 24
	I0806 01:17:07.242100    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:17:07.242173    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6811
	I0806 01:17:07.242972    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 36:1b:93:7d:33:4f in /var/db/dhcpd_leases ...
	I0806 01:17:07.243017    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:17:07.243027    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:17:07.243035    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:17:07.243044    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:17:07.243052    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:17:07.243062    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:17:07.243068    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:17:07.243076    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:17:07.243082    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:17:07.243090    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:17:07.243107    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:17:07.243119    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:17:07.243126    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:17:07.243140    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:17:07.243157    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:17:07.243169    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:17:07.243177    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:17:07.243184    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:17:09.245294    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 25
	I0806 01:17:09.245308    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:17:09.245392    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6811
	I0806 01:17:09.246264    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 36:1b:93:7d:33:4f in /var/db/dhcpd_leases ...
	I0806 01:17:09.246308    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:17:09.246321    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:17:09.246332    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:17:09.246338    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:17:09.246345    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:17:09.246351    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:17:09.246364    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:17:09.246372    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:17:09.246390    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:17:09.246402    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:17:09.246410    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:17:09.246416    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:17:09.246432    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:17:09.246445    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:17:09.246455    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:17:09.246466    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:17:09.246478    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:17:09.246487    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:17:11.248501    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 26
	I0806 01:17:11.248516    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:17:11.248592    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6811
	I0806 01:17:11.249476    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 36:1b:93:7d:33:4f in /var/db/dhcpd_leases ...
	I0806 01:17:11.249523    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:17:11.249533    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:17:11.249543    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:17:11.249551    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:17:11.249578    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:17:11.249586    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:17:11.249601    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:17:11.249617    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:17:11.249632    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:17:11.249640    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:17:11.249646    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:17:11.249662    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:17:11.249677    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:17:11.249691    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:17:11.249705    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:17:11.249716    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:17:11.249724    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:17:11.249733    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:17:13.250080    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 27
	I0806 01:17:13.250093    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:17:13.250242    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6811
	I0806 01:17:13.251077    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 36:1b:93:7d:33:4f in /var/db/dhcpd_leases ...
	I0806 01:17:13.251132    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:17:13.251143    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:17:13.251153    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:17:13.251162    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:17:13.251169    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:17:13.251175    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:17:13.251181    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:17:13.251205    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:17:13.251217    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:17:13.251225    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:17:13.251234    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:17:13.251241    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:17:13.251249    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:17:13.251258    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:17:13.251266    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:17:13.251281    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:17:13.251295    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:17:13.251314    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:17:15.253324    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 28
	I0806 01:17:15.253735    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:17:15.253847    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6811
	I0806 01:17:15.254235    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 36:1b:93:7d:33:4f in /var/db/dhcpd_leases ...
	I0806 01:17:15.254302    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:17:15.254315    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:17:15.254330    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:17:15.254337    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:17:15.254407    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:17:15.254433    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:17:15.254447    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:17:15.254453    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:17:15.254462    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:17:15.254475    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:17:15.254557    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:17:15.254586    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:17:15.254616    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:17:15.254628    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:17:15.254641    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:17:15.254649    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:17:15.254679    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:17:15.254699    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:17:17.255366    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Attempt 29
	I0806 01:17:17.255412    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:17:17.255532    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | hyperkit pid from json: 6811
	I0806 01:17:17.256387    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Searching for 36:1b:93:7d:33:4f in /var/db/dhcpd_leases ...
	I0806 01:17:17.256409    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:17:17.256418    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:17:17.256432    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:17:17.256439    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:17:17.256446    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:17:17.256453    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:17:17.256467    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:17:17.256476    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:17:17.256485    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:17:17.256491    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:17:17.256509    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:17:17.256527    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:17:17.256536    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:17:17.256545    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:17:17.256552    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:17:17.256560    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:17:17.256569    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:17:17.256577    6729 main.go:141] libmachine: (force-systemd-flag-672000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:17:19.258732    6729 client.go:171] duration metric: took 1m0.764288914s to LocalClient.Create
	I0806 01:17:21.260851    6729 start.go:128] duration metric: took 1m2.797454832s to createHost
	I0806 01:17:21.260905    6729 start.go:83] releasing machines lock for "force-systemd-flag-672000", held for 1m2.797630703s
	W0806 01:17:21.260979    6729 out.go:239] * Failed to start hyperkit VM. Running "minikube delete -p force-systemd-flag-672000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 36:1b:93:7d:33:4f
	* Failed to start hyperkit VM. Running "minikube delete -p force-systemd-flag-672000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 36:1b:93:7d:33:4f
	I0806 01:17:21.324010    6729 out.go:177] 
	W0806 01:17:21.345108    6729 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 36:1b:93:7d:33:4f
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 36:1b:93:7d:33:4f
	W0806 01:17:21.345124    6729 out.go:239] * 
	* 
	W0806 01:17:21.345779    6729 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 01:17:21.408094    6729 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-672000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-672000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-672000 ssh "docker info --format {{.CgroupDriver}}": exit status 50 (180.520475ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node force-systemd-flag-672000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-672000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 50
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-08-06 01:17:21.699247 -0700 PDT m=+4410.864016276
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-672000 -n force-systemd-flag-672000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-672000 -n force-systemd-flag-672000: exit status 7 (80.129117ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0806 01:17:21.777358    6836 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0806 01:17:21.777383    6836 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-672000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "force-systemd-flag-672000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-672000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-672000: (5.250720609s)
--- FAIL: TestForceSystemdFlag (252.08s)
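The failure mode above is the hyperkit driver polling /var/db/dhcpd_leases for the VM's MAC address (36:1b:93:7d:33:4f), finding 17 entries for other machines on every attempt, and finally giving up with "IP address never found in dhcp leases file". A minimal sketch of that lookup is below; this is an illustration, not minikube's actual code, and the brace-delimited `ip_address=`/`hw_address=` block format is assumed from macOS bootpd's lease file (the DBG lines above show the driver's parsed view of such entries):

```go
package main

import (
	"fmt"
	"strings"
)

// sampleLeases mimics the assumed macOS /var/db/dhcpd_leases block format.
const sampleLeases = `{
	name=minikube
	ip_address=192.169.0.18
	hw_address=1,c2:6a:9f:16:92:98
	lease=0x66b32b74
}
{
	name=minikube
	ip_address=192.169.0.17
	hw_address=1,5e:cf:f7:36:15:fc
	lease=0x66b32ab4
}`

// findIPForMAC returns the leased IP for mac, or "" when no entry matches --
// the condition the driver retries on attempt after attempt in the log above.
func findIPForMAC(leases, mac string) string {
	var ip, hw string
	for _, line := range strings.Split(leases, "\n") {
		line = strings.TrimSpace(line)
		switch {
		case line == "{": // start of a lease block: reset per-block state
			ip, hw = "", ""
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			hw = strings.TrimPrefix(line, "hw_address=")
			if i := strings.Index(hw, ","); i >= 0 {
				hw = hw[i+1:] // drop the "1," hardware-type prefix
			}
		case line == "}": // end of block: check for a match
			if hw == mac {
				return ip
			}
		}
	}
	return ""
}

func main() {
	fmt.Println(findIPForMAC(sampleLeases, "c2:6a:9f:16:92:98")) // 192.169.0.18
	// The MAC the log is searching for never appears, so this yields "":
	fmt.Println(findIPForMAC(sampleLeases, "36:1b:93:7d:33:4f") == "")
}
```

In the driver this lookup runs inside a retry loop (the numbered "Attempt N" lines), so a VM that never completes DHCP produces exactly the repeated full-table scans seen above before the GUEST_PROVISION error.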

                                                
                                    
TestForceSystemdEnv (233.59s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-176000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
E0806 01:12:24.548671    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
E0806 01:12:41.493570    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-176000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (3m48.025109957s)

                                                
                                                
-- stdout --
	* [force-systemd-env-176000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the hyperkit driver based on user configuration
	* Starting "force-systemd-env-176000" primary control-plane node in "force-systemd-env-176000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "force-systemd-env-176000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 01:10:24.719503    6653 out.go:291] Setting OutFile to fd 1 ...
	I0806 01:10:24.719773    6653 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:10:24.719778    6653 out.go:304] Setting ErrFile to fd 2...
	I0806 01:10:24.719782    6653 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:10:24.719946    6653 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	I0806 01:10:24.721515    6653 out.go:298] Setting JSON to false
	I0806 01:10:24.743921    6653 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":4186,"bootTime":1722927638,"procs":435,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0806 01:10:24.744004    6653 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 01:10:24.765777    6653 out.go:177] * [force-systemd-env-176000] minikube v1.33.1 on Darwin 14.5
	I0806 01:10:24.807375    6653 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 01:10:24.807411    6653 notify.go:220] Checking for updates...
	I0806 01:10:24.849195    6653 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 01:10:24.891265    6653 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0806 01:10:24.912191    6653 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 01:10:24.934366    6653 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 01:10:24.955354    6653 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0806 01:10:24.976512    6653 config.go:182] Loaded profile config "offline-docker-733000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 01:10:24.976591    6653 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 01:10:25.005358    6653 out.go:177] * Using the hyperkit driver based on user configuration
	I0806 01:10:25.047083    6653 start.go:297] selected driver: hyperkit
	I0806 01:10:25.047092    6653 start.go:901] validating driver "hyperkit" against <nil>
	I0806 01:10:25.047101    6653 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 01:10:25.049894    6653 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:10:25.050003    6653 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19370-944/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0806 01:10:25.058333    6653 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0806 01:10:25.062162    6653 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 01:10:25.062186    6653 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0806 01:10:25.062222    6653 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 01:10:25.062427    6653 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0806 01:10:25.062449    6653 cni.go:84] Creating CNI manager for ""
	I0806 01:10:25.062483    6653 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 01:10:25.062490    6653 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0806 01:10:25.062549    6653 start.go:340] cluster config:
	{Name:force-systemd-env-176000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-176000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 01:10:25.062632    6653 iso.go:125] acquiring lock: {Name:mka9ceffb203a07dd8928fb34e5b66df1a4204ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:10:25.104296    6653 out.go:177] * Starting "force-systemd-env-176000" primary control-plane node in "force-systemd-env-176000" cluster
	I0806 01:10:25.146294    6653 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 01:10:25.146321    6653 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0806 01:10:25.146337    6653 cache.go:56] Caching tarball of preloaded images
	I0806 01:10:25.146439    6653 preload.go:172] Found /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0806 01:10:25.146453    6653 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 01:10:25.146525    6653 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/force-systemd-env-176000/config.json ...
	I0806 01:10:25.146546    6653 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/force-systemd-env-176000/config.json: {Name:mk0d104d30519d1f3783ad6e90521b6a9c15b42a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 01:10:25.167552    6653 start.go:360] acquireMachinesLock for force-systemd-env-176000: {Name:mk23fe223591838ba69a1052c4474834b6e8897d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:11:03.672460    6653 start.go:364] duration metric: took 38.504218672s to acquireMachinesLock for "force-systemd-env-176000"
	I0806 01:11:03.672500    6653 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-176000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-176000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:11:03.672565    6653 start.go:125] createHost starting for "" (driver="hyperkit")
	I0806 01:11:03.693950    6653 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0806 01:11:03.694103    6653 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 01:11:03.694152    6653 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 01:11:03.702713    6653 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53706
	I0806 01:11:03.703095    6653 main.go:141] libmachine: () Calling .GetVersion
	I0806 01:11:03.703678    6653 main.go:141] libmachine: Using API Version  1
	I0806 01:11:03.703709    6653 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 01:11:03.703937    6653 main.go:141] libmachine: () Calling .GetMachineName
	I0806 01:11:03.704050    6653 main.go:141] libmachine: (force-systemd-env-176000) Calling .GetMachineName
	I0806 01:11:03.704133    6653 main.go:141] libmachine: (force-systemd-env-176000) Calling .DriverName
	I0806 01:11:03.704247    6653 start.go:159] libmachine.API.Create for "force-systemd-env-176000" (driver="hyperkit")
	I0806 01:11:03.704268    6653 client.go:168] LocalClient.Create starting
	I0806 01:11:03.704309    6653 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem
	I0806 01:11:03.704360    6653 main.go:141] libmachine: Decoding PEM data...
	I0806 01:11:03.704377    6653 main.go:141] libmachine: Parsing certificate...
	I0806 01:11:03.704432    6653 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem
	I0806 01:11:03.704470    6653 main.go:141] libmachine: Decoding PEM data...
	I0806 01:11:03.704478    6653 main.go:141] libmachine: Parsing certificate...
	I0806 01:11:03.704498    6653 main.go:141] libmachine: Running pre-create checks...
	I0806 01:11:03.704507    6653 main.go:141] libmachine: (force-systemd-env-176000) Calling .PreCreateCheck
	I0806 01:11:03.704586    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:03.704778    6653 main.go:141] libmachine: (force-systemd-env-176000) Calling .GetConfigRaw
	I0806 01:11:03.735883    6653 main.go:141] libmachine: Creating machine...
	I0806 01:11:03.735892    6653 main.go:141] libmachine: (force-systemd-env-176000) Calling .Create
	I0806 01:11:03.735976    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:03.736095    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | I0806 01:11:03.735970    6669 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 01:11:03.736145    6653 main.go:141] libmachine: (force-systemd-env-176000) Downloading /Users/jenkins/minikube-integration/19370-944/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-944/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 01:11:03.958073    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | I0806 01:11:03.957975    6669 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/id_rsa...
	I0806 01:11:04.033335    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | I0806 01:11:04.033259    6669 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/force-systemd-env-176000.rawdisk...
	I0806 01:11:04.033345    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Writing magic tar header
	I0806 01:11:04.033354    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Writing SSH key tar header
	I0806 01:11:04.033876    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | I0806 01:11:04.033840    6669 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000 ...
	I0806 01:11:04.410766    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:04.410797    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/hyperkit.pid
	I0806 01:11:04.410810    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Using UUID 2cae2b6d-8779-4227-b72b-853effafd120
	I0806 01:11:04.437842    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Generated MAC 9e:d9:da:b5:63:1e
	I0806 01:11:04.437862    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-176000
	I0806 01:11:04.437918    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:11:04 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2cae2b6d-8779-4227-b72b-853effafd120", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 01:11:04.437954    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:11:04 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2cae2b6d-8779-4227-b72b-853effafd120", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 01:11:04.437997    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:11:04 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2cae2b6d-8779-4227-b72b-853effafd120", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/force-systemd-env-176000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-176000"}
	I0806 01:11:04.438038    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:11:04 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2cae2b6d-8779-4227-b72b-853effafd120 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/force-systemd-env-176000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/console-ring -f kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-176000"
	I0806 01:11:04.438050    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:11:04 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0806 01:11:04.441041    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:11:04 DEBUG: hyperkit: Pid is 6670
	I0806 01:11:04.442204    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 0
	I0806 01:11:04.442225    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:04.442280    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6670
	I0806 01:11:04.443212    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 9e:d9:da:b5:63:1e in /var/db/dhcpd_leases ...
	I0806 01:11:04.443279    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:11:04.443289    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:11:04.443300    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:11:04.443309    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:11:04.443316    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:11:04.443323    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:11:04.443348    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:11:04.443363    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:11:04.443380    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:11:04.443414    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:11:04.443451    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:11:04.443468    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:11:04.443482    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:11:04.443496    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:11:04.443510    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:11:04.443524    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:11:04.443538    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:11:04.443557    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:11:04.448659    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:11:04 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0806 01:11:04.456654    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:11:04 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0806 01:11:04.457530    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:11:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 01:11:04.457564    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:11:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 01:11:04.457579    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:11:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 01:11:04.457590    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:11:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 01:11:04.834222    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:11:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0806 01:11:04.834251    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:11:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0806 01:11:04.948813    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:11:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 01:11:04.948847    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:11:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 01:11:04.948864    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:11:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 01:11:04.948881    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:11:04 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 01:11:04.949719    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:11:04 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0806 01:11:04.949730    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:11:04 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0806 01:11:06.444205    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 1
	I0806 01:11:06.444220    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:06.444273    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6670
	I0806 01:11:06.445086    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 9e:d9:da:b5:63:1e in /var/db/dhcpd_leases ...
	I0806 01:11:06.445109    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:11:06.445124    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:11:06.445138    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:11:06.445155    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:11:06.445167    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:11:06.445188    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:11:06.445206    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:11:06.445215    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:11:06.445224    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:11:06.445242    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:11:06.445251    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:11:06.445259    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:11:06.445268    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:11:06.445275    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:11:06.445284    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:11:06.445292    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:11:06.445300    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:11:06.445309    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:11:08.445744    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 2
	I0806 01:11:08.445765    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:08.445840    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6670
	I0806 01:11:08.446638    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 9e:d9:da:b5:63:1e in /var/db/dhcpd_leases ...
	I0806 01:11:08.446713    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:11:08.446724    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:11:08.446733    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:11:08.446743    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:11:08.446751    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:11:08.446759    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:11:08.446765    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:11:08.446777    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:11:08.446786    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:11:08.446793    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:11:08.446801    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:11:08.446809    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:11:08.446816    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:11:08.446824    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:11:08.446833    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:11:08.446850    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:11:08.446863    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:11:08.446875    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:11:10.351225    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:11:10 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0806 01:11:10.351430    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:11:10 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0806 01:11:10.351462    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:11:10 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0806 01:11:10.372614    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:11:10 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0806 01:11:10.447606    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 3
	I0806 01:11:10.447637    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:10.447854    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6670
	I0806 01:11:10.449255    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 9e:d9:da:b5:63:1e in /var/db/dhcpd_leases ...
	I0806 01:11:10.449370    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:11:10.449395    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:11:10.449412    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:11:10.449423    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:11:10.449436    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:11:10.449455    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:11:10.449477    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:11:10.449489    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:11:10.449504    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:11:10.449516    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:11:10.449568    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:11:10.449585    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:11:10.449595    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:11:10.449612    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:11:10.449623    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:11:10.449635    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:11:10.449651    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:11:10.449665    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:11:12.450678    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 4
	I0806 01:11:12.450693    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:12.450817    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6670
	I0806 01:11:12.451578    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 9e:d9:da:b5:63:1e in /var/db/dhcpd_leases ...
	I0806 01:11:12.451629    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:11:12.451638    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:11:12.451650    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:11:12.451660    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:11:12.451670    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:11:12.451678    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:11:12.451684    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:11:12.451693    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:11:12.451700    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:11:12.451710    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:11:12.451717    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:11:12.451725    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:11:12.451767    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:11:12.451780    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:11:12.451797    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:11:12.451804    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:11:12.451812    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:11:12.451822    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:11:14.452163    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 5
	I0806 01:11:14.452176    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:14.452234    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6670
	I0806 01:11:14.453111    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 9e:d9:da:b5:63:1e in /var/db/dhcpd_leases ...
	I0806 01:11:14.453156    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:11:14.453166    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:11:14.453192    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:11:14.453203    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:11:14.453210    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:11:14.453217    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:11:14.453239    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:11:14.453251    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:11:14.453261    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:11:14.453275    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:11:14.453285    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:11:14.453293    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:11:14.453302    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:11:14.453308    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:11:14.453315    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:11:14.453323    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:11:14.453346    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:11:14.453360    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:11:16.453712    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 6
	I0806 01:11:16.453727    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:16.453787    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6670
	I0806 01:11:16.454607    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 9e:d9:da:b5:63:1e in /var/db/dhcpd_leases ...
	I0806 01:11:16.454643    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:11:16.454654    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:11:16.454673    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:11:16.454684    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:11:16.454692    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:11:16.454701    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:11:16.454710    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:11:16.454717    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:11:16.454726    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:11:16.454744    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:11:16.454756    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:11:16.454769    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:11:16.454777    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:11:16.454784    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:11:16.454794    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:11:16.454812    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:11:16.454820    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:11:16.454832    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:11:18.456377    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 7
	I0806 01:11:18.456393    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:18.456492    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6670
	I0806 01:11:18.457265    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 9e:d9:da:b5:63:1e in /var/db/dhcpd_leases ...
	I0806 01:11:18.457301    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:11:18.457314    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:11:18.457331    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:11:18.457350    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:11:18.457359    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:11:18.457370    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:11:18.457382    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:11:18.457390    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:11:18.457398    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:11:18.457406    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:11:18.457420    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:11:18.457428    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:11:18.457437    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:11:18.457445    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:11:18.457463    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:11:18.457476    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:11:18.457490    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:11:18.457499    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:11:20.459487    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 8
	I0806 01:11:20.459503    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:20.459577    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6670
	I0806 01:11:20.460337    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 9e:d9:da:b5:63:1e in /var/db/dhcpd_leases ...
	I0806 01:11:20.460383    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:11:20.460391    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:11:20.460400    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:11:20.460408    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:11:20.460415    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:11:20.460423    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:11:20.460439    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:11:20.460455    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:11:20.460464    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:11:20.460472    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:11:20.460487    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:11:20.460493    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:11:20.460500    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:11:20.460508    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:11:20.460515    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:11:20.460522    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:11:20.460530    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:11:20.460538    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:11:22.462492    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 9
	I0806 01:11:22.462508    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:22.462575    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6670
	I0806 01:11:22.463355    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 9e:d9:da:b5:63:1e in /var/db/dhcpd_leases ...
	I0806 01:11:22.463402    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:11:22.463418    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:11:22.463448    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:11:22.463464    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:11:22.463479    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:11:22.463491    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:11:22.463509    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:11:22.463521    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:11:22.463534    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:11:22.463547    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:11:22.463555    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:11:22.463563    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:11:22.463579    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:11:22.463592    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:11:22.463619    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:11:22.463631    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:11:22.463644    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:11:22.463657    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:11:24.465576    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 10
	I0806 01:11:24.465591    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:24.465673    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6670
	I0806 01:11:24.466753    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 9e:d9:da:b5:63:1e in /var/db/dhcpd_leases ...
	I0806 01:11:24.466813    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:11:24.466826    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:11:24.466834    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:11:24.466844    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:11:24.466853    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:11:24.466861    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:11:24.466880    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:11:24.466892    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:11:24.466901    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:11:24.466909    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:11:24.466917    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:11:24.466923    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:11:24.466930    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:11:24.466938    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:11:24.466955    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:11:24.466963    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:11:24.466971    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:11:24.466979    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:11:26.468178    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 11
	I0806 01:11:26.468208    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:26.468316    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6670
	I0806 01:11:26.469134    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 9e:d9:da:b5:63:1e in /var/db/dhcpd_leases ...
	I0806 01:11:26.469184    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:11:26.469196    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:11:26.469223    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:11:26.469235    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:11:26.469247    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:11:26.469257    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:11:26.469264    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:11:26.469272    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:11:26.469280    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:11:26.469296    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:11:26.469311    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:11:26.469324    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:11:26.469337    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:11:26.469350    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:11:26.469359    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:11:26.469367    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:11:26.469383    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:11:26.469391    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:11:28.471416    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 12
	I0806 01:11:28.471432    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:28.471468    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6670
	I0806 01:11:28.472232    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 9e:d9:da:b5:63:1e in /var/db/dhcpd_leases ...
	I0806 01:11:28.472277    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:11:28.472285    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:11:28.472298    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:11:28.472308    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:11:28.472316    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:11:28.472324    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:11:28.472334    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:11:28.472351    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:11:28.472363    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:11:28.472373    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:11:28.472380    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:11:28.472388    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:11:28.472397    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:11:28.472411    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:11:28.472424    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:11:28.472433    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:11:28.472450    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:11:28.472465    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:11:30.473829    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 13
	I0806 01:11:30.473855    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:30.473869    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6670
	I0806 01:11:30.474943    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 9e:d9:da:b5:63:1e in /var/db/dhcpd_leases ...
	I0806 01:11:30.474987    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:11:30.475001    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:11:30.475011    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:11:30.475017    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:11:30.475025    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:11:30.475031    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:11:30.475054    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:11:30.475078    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:11:30.475102    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:11:30.475114    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:11:30.475133    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:11:30.475144    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:11:30.475153    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:11:30.475161    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:11:30.475171    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:11:30.475179    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:11:30.475187    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:11:30.475201    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:11:32.476496    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 14
	I0806 01:11:32.476511    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:32.476629    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6670
	I0806 01:11:32.477439    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 9e:d9:da:b5:63:1e in /var/db/dhcpd_leases ...
	I0806 01:11:32.477484    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:11:32.477498    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:11:32.477535    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:11:32.477552    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:11:32.477563    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:11:32.477579    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:11:32.477588    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:11:32.477597    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:11:32.477613    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:11:32.477628    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:11:32.477638    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:11:32.477645    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:11:32.477652    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:11:32.477661    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:11:32.477672    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:11:32.477679    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:11:32.477687    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:11:32.477702    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:11:34.477717    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 15
	I0806 01:11:34.477745    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:34.477828    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6670
	I0806 01:11:34.478616    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 9e:d9:da:b5:63:1e in /var/db/dhcpd_leases ...
	I0806 01:11:34.478671    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:11:34.478681    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:11:34.478690    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:11:34.478698    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:11:34.478706    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:11:34.478712    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:11:34.478718    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:11:34.478728    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:11:34.478734    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:11:34.478740    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:11:34.478757    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:11:34.478768    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:11:34.478776    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:11:34.478784    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:11:34.478791    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:11:34.478799    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:11:34.478807    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:11:34.478815    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:11:36.480343    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 16
	I0806 01:11:36.480359    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:36.480421    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6670
	I0806 01:11:36.481242    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 9e:d9:da:b5:63:1e in /var/db/dhcpd_leases ...
	I0806 01:11:36.481291    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:11:36.481316    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:11:36.481341    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:11:36.481356    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:11:36.481366    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:11:36.481382    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:11:36.481393    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:11:36.481402    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:11:36.481410    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:11:36.481420    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:11:36.481427    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:11:36.481435    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:11:36.481443    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:11:36.481453    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:11:36.481460    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:11:36.481476    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:11:36.481489    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:11:36.481502    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:11:38.483502    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 17
	I0806 01:11:38.483514    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:38.483574    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6670
	I0806 01:11:38.484360    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 9e:d9:da:b5:63:1e in /var/db/dhcpd_leases ...
	I0806 01:11:38.484382    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:11:38.484390    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:11:38.484401    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:11:38.484409    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:11:38.484415    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:11:38.484433    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:11:38.484442    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:11:38.484450    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:11:38.484457    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:11:38.484477    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:11:38.484490    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:11:38.484498    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:11:38.484507    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:11:38.484514    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:11:38.484522    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:11:38.484529    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:11:38.484537    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:11:38.484546    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:11:40.484642    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 18
	I0806 01:11:40.484658    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:40.484720    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6670
	I0806 01:11:40.485483    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 9e:d9:da:b5:63:1e in /var/db/dhcpd_leases ...
	I0806 01:11:40.485527    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:11:40.485537    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:11:40.485551    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:11:40.485565    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:11:40.485574    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:11:40.485581    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:11:40.485592    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:11:40.485601    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:11:40.485611    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:11:40.485619    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:11:40.485627    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:11:40.485635    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:11:40.485643    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:11:40.485656    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:11:40.485671    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:11:40.485685    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:11:40.485693    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:11:40.485702    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:11:42.487760    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 19
	I0806 01:11:42.487773    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:42.487783    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6670
	I0806 01:11:42.488629    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 9e:d9:da:b5:63:1e in /var/db/dhcpd_leases ...
	I0806 01:11:42.488654    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:11:42.488664    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:11:42.488672    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:11:42.488678    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:11:42.488684    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:11:42.488690    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:11:42.488702    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:11:42.488709    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:11:42.488718    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:11:42.488726    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:11:42.488733    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:11:42.488742    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:11:42.488749    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:11:42.488758    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:11:42.488765    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:11:42.488773    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:11:42.488789    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:11:42.488802    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:11:44.490837    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 20
	I0806 01:11:44.490850    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:44.490941    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6670
	I0806 01:11:44.491724    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 9e:d9:da:b5:63:1e in /var/db/dhcpd_leases ...
	I0806 01:11:44.491773    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:11:44.491783    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:11:44.491801    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:11:44.491808    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:11:44.491823    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:11:44.491838    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:11:44.491847    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:11:44.491853    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:11:44.491867    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:11:44.491875    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:11:44.491883    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:11:44.491891    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:11:44.491898    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:11:44.491908    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:11:44.491927    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:11:44.491939    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:11:44.491948    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:11:44.491957    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:11:46.493425    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 21
	I0806 01:11:46.493440    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:46.493521    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6670
	I0806 01:11:46.494297    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 9e:d9:da:b5:63:1e in /var/db/dhcpd_leases ...
	I0806 01:11:46.494354    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:11:46.494366    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:11:46.494373    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:11:46.494381    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:11:46.494401    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:11:46.494413    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:11:46.494434    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:11:46.494446    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:11:46.494457    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:11:46.494468    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:11:46.494477    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:11:46.494486    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:11:46.494501    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:11:46.494509    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:11:46.494517    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:11:46.494524    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:11:46.494531    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:11:46.494539    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:11:48.494804    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 22
	I0806 01:11:48.494818    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:48.494898    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6670
	I0806 01:11:48.495754    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 9e:d9:da:b5:63:1e in /var/db/dhcpd_leases ...
	I0806 01:11:48.495807    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:11:48.495818    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:11:48.495835    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:11:48.495845    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:11:48.495854    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:11:48.495862    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:11:48.495882    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:11:48.495902    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:11:48.495917    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:11:48.495927    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:11:48.495935    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:11:48.495945    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:11:48.495952    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:11:48.495958    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:11:48.495980    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:11:48.495993    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:11:48.496001    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:11:48.496010    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:11:50.497258    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 23
	I0806 01:11:50.497273    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:50.497326    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6670
	I0806 01:11:50.498099    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 9e:d9:da:b5:63:1e in /var/db/dhcpd_leases ...
	I0806 01:11:50.498152    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:11:50.498163    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:11:50.498182    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:11:50.498190    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:11:50.498208    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:11:50.498220    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:11:50.498229    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:11:50.498238    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:11:50.498254    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:11:50.498266    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:11:50.498280    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:11:50.498288    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:11:50.498297    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:11:50.498306    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:11:50.498314    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:11:50.498321    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:11:50.498332    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:11:50.498342    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:11:52.500350    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 24
	I0806 01:11:52.500363    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:52.500394    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6670
	I0806 01:11:52.501605    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 9e:d9:da:b5:63:1e in /var/db/dhcpd_leases ...
	I0806 01:11:52.501615    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:11:52.501624    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:11:52.501632    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:11:52.501640    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:11:52.501652    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:11:52.501662    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:11:52.501670    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:11:52.501681    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:11:52.501689    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:11:52.501696    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:11:52.501704    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:11:52.501720    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:11:52.501733    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:11:52.501741    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:11:52.501749    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:11:52.501757    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:11:52.501765    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:11:52.501775    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:11:54.503205    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 25
	I0806 01:11:54.503221    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:54.503316    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6670
	I0806 01:11:54.504125    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 9e:d9:da:b5:63:1e in /var/db/dhcpd_leases ...
	I0806 01:11:54.504171    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:11:54.504180    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:11:54.504204    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:11:54.504221    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:11:54.504237    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:11:54.504246    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:11:54.504253    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:11:54.504261    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:11:54.504267    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:11:54.504274    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:11:54.504283    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:11:54.504289    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:11:54.504297    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:11:54.504314    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:11:54.504328    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:11:54.504336    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:11:54.504345    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:11:54.504353    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:11:56.506353    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 26
	I0806 01:11:56.506369    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:56.506482    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6670
	I0806 01:11:56.507282    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 9e:d9:da:b5:63:1e in /var/db/dhcpd_leases ...
	I0806 01:11:56.507328    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:11:56.507340    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:11:56.507348    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:11:56.507355    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:11:56.507363    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:11:56.507369    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:11:56.507396    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:11:56.507412    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:11:56.507422    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:11:56.507431    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:11:56.507437    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:11:56.507444    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:11:56.507452    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:11:56.507483    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:11:56.507508    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:11:56.507515    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:11:56.507547    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:11:56.507563    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:11:58.509376    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 27
	I0806 01:11:58.509390    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:11:58.509490    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6670
	I0806 01:11:58.510303    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 9e:d9:da:b5:63:1e in /var/db/dhcpd_leases ...
	I0806 01:11:58.510340    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:11:58.510349    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:11:58.510362    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:11:58.510372    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:11:58.510389    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:11:58.510402    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:11:58.510411    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:11:58.510424    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:11:58.510433    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:11:58.510439    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:11:58.510453    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:11:58.510471    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:11:58.510479    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:11:58.510487    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:11:58.510501    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:11:58.510515    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:11:58.510524    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:11:58.510530    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:12:00.511340    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 28
	I0806 01:12:00.511353    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:00.511393    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6670
	I0806 01:12:00.512206    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 9e:d9:da:b5:63:1e in /var/db/dhcpd_leases ...
	I0806 01:12:00.512259    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:12:00.512270    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:12:00.512279    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:12:00.512288    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:12:00.512295    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:12:00.512302    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:12:00.512320    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:12:00.512328    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:12:00.512336    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:12:00.512345    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:12:00.512363    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:12:00.512370    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:12:00.512378    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:12:00.512392    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:12:00.512404    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:12:00.512412    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:12:00.512418    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:12:00.512449    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:12:02.512941    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 29
	I0806 01:12:02.512956    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:02.513034    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6670
	I0806 01:12:02.513911    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 9e:d9:da:b5:63:1e in /var/db/dhcpd_leases ...
	I0806 01:12:02.513956    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:12:02.513967    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:12:02.513978    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:12:02.513990    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:12:02.514018    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:12:02.514030    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:12:02.514038    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:12:02.514046    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:12:02.514054    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:12:02.514061    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:12:02.514074    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:12:02.514081    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:12:02.514087    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:12:02.514095    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:12:02.514102    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:12:02.514108    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:12:02.514114    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:12:02.514122    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:12:04.515549    6653 client.go:171] duration metric: took 1m0.810209846s to LocalClient.Create
	I0806 01:12:06.517077    6653 start.go:128] duration metric: took 1m2.843403376s to createHost
	I0806 01:12:06.517091    6653 start.go:83] releasing machines lock for "force-systemd-env-176000", held for 1m2.843524334s
	W0806 01:12:06.517128    6653 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 9e:d9:da:b5:63:1e
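The "Attempt N" blocks above show the driver repeatedly scanning macOS's `/var/db/dhcpd_leases` for the VM's generated MAC (`9e:d9:da:b5:63:1e`); after 29 attempts the lease never appears and creation fails. A minimal, self-contained sketch of that lookup (a hypothetical helper, not minikube's actual code; the `ip_address=`/`hw_address=` field names follow the lease layout macOS's bootpd is commonly documented to write):

```go
package main

import (
	"fmt"
	"strings"
)

// findIPForMAC scans dhcpd_leases-style text for the entry whose
// hw_address line contains the given MAC and returns that entry's
// ip_address. Illustrative sketch only; field names are assumptions.
func findIPForMAC(leases, mac string) (string, bool) {
	var ip string
	for _, line := range strings.Split(leases, "\n") {
		line = strings.TrimSpace(line)
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			if strings.Contains(line, mac) {
				return ip, true
			}
		case line == "}":
			ip = "" // entry closed without matching; reset
		}
	}
	return "", false
}

func main() {
	sample := "{\n\tname=minikube\n\tip_address=192.169.0.11\n\thw_address=1,86:6d:d0:27:68:33\n\tlease=0x66b322e9\n}"
	fmt.Println(findIPForMAC(sample, "86:6d:d0:27:68:33"))
	fmt.Println(findIPForMAC(sample, "9e:d9:da:b5:63:1e"))
}
```

The failure mode in this log is exactly the second call: the file holds 17 entries, none of which carries the new VM's MAC, so each poll comes back empty until the 60-second timeout fires.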
	I0806 01:12:06.517442    6653 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 01:12:06.517465    6653 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 01:12:06.526089    6653 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53709
	I0806 01:12:06.526431    6653 main.go:141] libmachine: () Calling .GetVersion
	I0806 01:12:06.526782    6653 main.go:141] libmachine: Using API Version  1
	I0806 01:12:06.526793    6653 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 01:12:06.526997    6653 main.go:141] libmachine: () Calling .GetMachineName
	I0806 01:12:06.527357    6653 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 01:12:06.527383    6653 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 01:12:06.535749    6653 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53711
	I0806 01:12:06.536069    6653 main.go:141] libmachine: () Calling .GetVersion
	I0806 01:12:06.536415    6653 main.go:141] libmachine: Using API Version  1
	I0806 01:12:06.536431    6653 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 01:12:06.536635    6653 main.go:141] libmachine: () Calling .GetMachineName
	I0806 01:12:06.536751    6653 main.go:141] libmachine: (force-systemd-env-176000) Calling .GetState
	I0806 01:12:06.536851    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:06.536918    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6670
	I0806 01:12:06.537916    6653 main.go:141] libmachine: (force-systemd-env-176000) Calling .DriverName
	I0806 01:12:06.580446    6653 out.go:177] * Deleting "force-systemd-env-176000" in hyperkit ...
	I0806 01:12:06.622260    6653 main.go:141] libmachine: (force-systemd-env-176000) Calling .Remove
	I0806 01:12:06.622397    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:06.622408    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:06.622465    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6670
	I0806 01:12:06.623399    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:06.623453    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | waiting for graceful shutdown
	I0806 01:12:07.625597    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:07.625733    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6670
	I0806 01:12:07.626648    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | waiting for graceful shutdown
	I0806 01:12:08.628020    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:08.628130    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6670
	I0806 01:12:08.629973    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | waiting for graceful shutdown
	I0806 01:12:09.631288    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:09.631358    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6670
	I0806 01:12:09.632088    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | waiting for graceful shutdown
	I0806 01:12:10.633530    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:10.633636    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6670
	I0806 01:12:10.634214    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | waiting for graceful shutdown
	I0806 01:12:11.635466    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:11.635547    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6670
	I0806 01:12:11.636700    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | sending sigkill
	I0806 01:12:11.636713    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:12:11.648759    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:12:11 WARN : hyperkit: failed to read stderr: EOF
	I0806 01:12:11.648784    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:12:11 WARN : hyperkit: failed to read stdout: EOF
	W0806 01:12:11.666500    6653 out.go:239] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 9e:d9:da:b5:63:1e
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 9e:d9:da:b5:63:1e
	I0806 01:12:11.666519    6653 start.go:729] Will try again in 5 seconds ...
	I0806 01:12:16.668684    6653 start.go:360] acquireMachinesLock for force-systemd-env-176000: {Name:mk23fe223591838ba69a1052c4474834b6e8897d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:13:09.379160    6653 start.go:364] duration metric: took 52.709513252s to acquireMachinesLock for "force-systemd-env-176000"
	I0806 01:13:09.379198    6653 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-176000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernet
esConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-176000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:13:09.379245    6653 start.go:125] createHost starting for "" (driver="hyperkit")
	I0806 01:13:09.421341    6653 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0806 01:13:09.421425    6653 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 01:13:09.421452    6653 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 01:13:09.430082    6653 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53715
	I0806 01:13:09.430420    6653 main.go:141] libmachine: () Calling .GetVersion
	I0806 01:13:09.430764    6653 main.go:141] libmachine: Using API Version  1
	I0806 01:13:09.430777    6653 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 01:13:09.431037    6653 main.go:141] libmachine: () Calling .GetMachineName
	I0806 01:13:09.431175    6653 main.go:141] libmachine: (force-systemd-env-176000) Calling .GetMachineName
	I0806 01:13:09.431273    6653 main.go:141] libmachine: (force-systemd-env-176000) Calling .DriverName
	I0806 01:13:09.431394    6653 start.go:159] libmachine.API.Create for "force-systemd-env-176000" (driver="hyperkit")
	I0806 01:13:09.431437    6653 client.go:168] LocalClient.Create starting
	I0806 01:13:09.431465    6653 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem
	I0806 01:13:09.431519    6653 main.go:141] libmachine: Decoding PEM data...
	I0806 01:13:09.431535    6653 main.go:141] libmachine: Parsing certificate...
	I0806 01:13:09.431579    6653 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem
	I0806 01:13:09.431617    6653 main.go:141] libmachine: Decoding PEM data...
	I0806 01:13:09.431629    6653 main.go:141] libmachine: Parsing certificate...
	I0806 01:13:09.431641    6653 main.go:141] libmachine: Running pre-create checks...
	I0806 01:13:09.431647    6653 main.go:141] libmachine: (force-systemd-env-176000) Calling .PreCreateCheck
	I0806 01:13:09.431726    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:13:09.431798    6653 main.go:141] libmachine: (force-systemd-env-176000) Calling .GetConfigRaw
	I0806 01:13:09.442657    6653 main.go:141] libmachine: Creating machine...
	I0806 01:13:09.442669    6653 main.go:141] libmachine: (force-systemd-env-176000) Calling .Create
	I0806 01:13:09.442768    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:13:09.442893    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | I0806 01:13:09.442761    6716 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 01:13:09.442944    6653 main.go:141] libmachine: (force-systemd-env-176000) Downloading /Users/jenkins/minikube-integration/19370-944/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-944/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 01:13:09.779732    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | I0806 01:13:09.779673    6716 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/id_rsa...
	I0806 01:13:10.032368    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | I0806 01:13:10.032272    6716 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/force-systemd-env-176000.rawdisk...
	I0806 01:13:10.032381    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Writing magic tar header
	I0806 01:13:10.032393    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Writing SSH key tar header
	I0806 01:13:10.032943    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | I0806 01:13:10.032902    6716 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000 ...
	I0806 01:13:10.406105    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:13:10.406125    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/hyperkit.pid
	I0806 01:13:10.406135    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Using UUID fd43d55a-98d3-4b05-8af1-bb26140a190c
	I0806 01:13:10.430843    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Generated MAC 82:54:6d:e6:8d:55
	I0806 01:13:10.430872    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-176000
	I0806 01:13:10.430940    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:13:10 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"fd43d55a-98d3-4b05-8af1-bb26140a190c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(
nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 01:13:10.430977    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:13:10 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"fd43d55a-98d3-4b05-8af1-bb26140a190c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 01:13:10.431049    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:13:10 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "fd43d55a-98d3-4b05-8af1-bb26140a190c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/force-systemd-env-176000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-176000"}
	I0806 01:13:10.431088    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:13:10 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U fd43d55a-98d3-4b05-8af1-bb26140a190c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/force-systemd-env-176000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/console-ring -f kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-176000"
	I0806 01:13:10.431119    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:13:10 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0806 01:13:10.434053    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:13:10 DEBUG: hyperkit: Pid is 6726
	I0806 01:13:10.434554    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 0
	I0806 01:13:10.434571    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:13:10.434641    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6726
	I0806 01:13:10.435555    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 82:54:6d:e6:8d:55 in /var/db/dhcpd_leases ...
	I0806 01:13:10.435602    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:13:10.435613    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:13:10.435629    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:13:10.435640    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:13:10.435654    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:13:10.435661    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:13:10.435702    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:13:10.435720    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:13:10.435729    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:13:10.435737    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:13:10.435748    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:13:10.435758    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:13:10.435766    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:13:10.435790    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:13:10.435814    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:13:10.435833    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:13:10.435843    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:13:10.435856    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:13:10.441672    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:13:10 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0806 01:13:10.449849    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:13:10 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/force-systemd-env-176000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0806 01:13:10.450722    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:13:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 01:13:10.450745    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:13:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 01:13:10.450760    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:13:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 01:13:10.450775    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:13:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 01:13:10.828947    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:13:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0806 01:13:10.828963    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:13:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0806 01:13:10.943640    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:13:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 01:13:10.943658    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:13:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 01:13:10.943684    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:13:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 01:13:10.943702    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:13:10 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 01:13:10.944548    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:13:10 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0806 01:13:10.944559    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:13:10 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0806 01:13:12.436032    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 1
	I0806 01:13:12.436048    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:13:12.436122    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6726
	I0806 01:13:12.436910    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 82:54:6d:e6:8d:55 in /var/db/dhcpd_leases ...
	I0806 01:13:12.436975    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:13:12.436984    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:13:12.436991    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:13:12.436999    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:13:12.437010    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:13:12.437016    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:13:12.437026    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:13:12.437036    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:13:12.437042    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:13:12.437049    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:13:12.437057    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:13:12.437066    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:13:12.437079    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:13:12.437088    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:13:12.437095    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:13:12.437102    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:13:12.437112    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:13:12.437121    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:13:14.437553    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 2
	I0806 01:13:14.437600    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:13:14.437696    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6726
	I0806 01:13:14.438603    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 82:54:6d:e6:8d:55 in /var/db/dhcpd_leases ...
	I0806 01:13:14.438656    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:13:14.438669    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:13:14.438688    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:13:14.438700    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:13:14.438708    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:13:14.438715    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:13:14.438731    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:13:14.438741    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:13:14.438752    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:13:14.438761    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:13:14.438774    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:13:14.438780    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:13:14.438801    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:13:14.438812    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:13:14.438821    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:13:14.438829    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:13:14.438836    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:13:14.438841    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:13:16.320044    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:13:16 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0806 01:13:16.320164    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:13:16 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0806 01:13:16.320173    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:13:16 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0806 01:13:16.340110    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | 2024/08/06 01:13:16 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0806 01:13:16.438981    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 3
	I0806 01:13:16.439004    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:13:16.439127    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6726
	I0806 01:13:16.440241    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 82:54:6d:e6:8d:55 in /var/db/dhcpd_leases ...
	I0806 01:13:16.440320    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:13:16.440339    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:13:16.440349    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:13:16.440360    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:13:16.440380    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:13:16.440399    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:13:16.440410    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:13:16.440420    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:13:16.440429    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:13:16.440441    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:13:16.440474    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:13:16.440491    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:13:16.440504    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:13:16.440516    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:13:16.440527    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:13:16.440538    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:13:16.440551    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:13:16.440562    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:13:18.442150    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 4
	I0806 01:13:18.442166    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:13:18.442259    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6726
	I0806 01:13:18.443044    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 82:54:6d:e6:8d:55 in /var/db/dhcpd_leases ...
	I0806 01:13:18.443104    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:13:18.443116    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:13:18.443125    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:13:18.443131    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:13:18.443139    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:13:18.443145    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:13:18.443152    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:13:18.443159    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:13:18.443174    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:13:18.443180    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:13:18.443186    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:13:18.443195    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:13:18.443215    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:13:18.443223    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:13:18.443230    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:13:18.443236    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:13:18.443252    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:13:18.443263    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:13:20.444666    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 5
	I0806 01:13:20.444680    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:13:20.444730    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6726
	I0806 01:13:20.445581    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 82:54:6d:e6:8d:55 in /var/db/dhcpd_leases ...
	I0806 01:13:20.445627    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:13:20.445636    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:13:20.445646    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:13:20.445655    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:13:20.445669    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:13:20.445683    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:13:20.445701    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:13:20.445710    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:13:20.445718    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:13:20.445727    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:13:20.445739    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:13:20.445746    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:13:20.445754    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:13:20.445762    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:13:20.445778    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:13:20.445785    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:13:20.445800    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:13:20.445810    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:13:22.447287    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 6
	I0806 01:13:22.447302    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:13:22.447357    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6726
	I0806 01:13:22.448138    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 82:54:6d:e6:8d:55 in /var/db/dhcpd_leases ...
	I0806 01:13:22.448196    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:13:22.448218    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:13:22.448225    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:13:22.448255    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:13:22.448263    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:13:22.448276    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:13:22.448283    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:13:22.448292    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:13:22.448299    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:13:22.448306    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:13:22.448315    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:13:22.448324    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:13:22.448332    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:13:22.448339    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:13:22.448347    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:13:22.448356    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:13:22.448364    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:13:22.448373    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:13:24.449606    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 7
	I0806 01:13:24.449618    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:13:24.449724    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6726
	I0806 01:13:24.450552    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 82:54:6d:e6:8d:55 in /var/db/dhcpd_leases ...
	I0806 01:13:24.450580    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:13:24.450589    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:13:24.450619    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:13:24.450634    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:13:24.450645    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:13:24.450654    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:13:24.450661    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:13:24.450668    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:13:24.450682    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:13:24.450695    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:13:24.450705    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:13:24.450714    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:13:24.450725    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:13:24.450733    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:13:24.450740    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:13:24.450749    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:13:24.450756    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:13:24.450764    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:13:26.451447    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 8
	I0806 01:13:26.451467    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:13:26.451561    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6726
	I0806 01:13:26.452331    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 82:54:6d:e6:8d:55 in /var/db/dhcpd_leases ...
	I0806 01:13:26.452360    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:13:26.452371    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:13:26.452381    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:13:26.452389    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:13:26.452399    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:13:26.452406    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:13:26.452413    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:13:26.452419    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:13:26.452426    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:13:26.452434    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:13:26.452442    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:13:26.452451    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:13:26.452469    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:13:26.452482    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:13:26.452490    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:13:26.452498    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:13:26.452505    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:13:26.452513    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:13:28.454283    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 9
	I0806 01:13:28.454296    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:13:28.454457    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6726
	I0806 01:13:28.455279    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 82:54:6d:e6:8d:55 in /var/db/dhcpd_leases ...
	I0806 01:13:28.455317    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:13:28.455332    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:13:28.455342    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:13:28.455348    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:13:28.455355    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:13:28.455362    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:13:28.455368    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:13:28.455397    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:13:28.455412    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:13:28.455423    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:13:28.455440    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:13:28.455449    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:13:28.455457    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:13:28.455471    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:13:28.455488    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:13:28.455496    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:13:28.455503    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:13:28.455509    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:13:30.457379    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 10
	I0806 01:13:30.457393    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:13:30.457458    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6726
	I0806 01:13:30.458350    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 82:54:6d:e6:8d:55 in /var/db/dhcpd_leases ...
	I0806 01:13:30.458386    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:13:30.458399    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:13:30.458410    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:13:30.458418    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:13:30.458432    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:13:30.458444    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:13:30.458452    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:13:30.458466    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:13:30.458474    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:13:30.458479    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:13:30.458495    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:13:30.458507    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:13:30.458517    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:13:30.458524    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:13:30.458530    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:13:30.458536    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:13:30.458544    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:13:30.458555    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:13:32.460578    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 11
	I0806 01:13:32.460593    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:13:32.460678    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6726
	I0806 01:13:32.461456    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 82:54:6d:e6:8d:55 in /var/db/dhcpd_leases ...
	I0806 01:13:32.461508    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:13:32.461528    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:13:32.461539    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:13:32.461547    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:13:32.461555    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:13:32.461561    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:13:32.461569    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:13:32.461576    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:13:32.461584    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:13:32.461593    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:13:32.461608    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:13:32.461624    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:13:32.461635    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:13:32.461644    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:13:32.461651    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:13:32.461657    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:13:32.461677    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:13:32.461706    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:13:34.463116    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 12
	I0806 01:13:34.463129    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:13:34.463189    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6726
	I0806 01:13:34.464059    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 82:54:6d:e6:8d:55 in /var/db/dhcpd_leases ...
	I0806 01:13:34.464110    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:13:34.464119    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:13:34.464127    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:13:34.464155    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:13:34.464166    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:13:34.464174    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:13:34.464182    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:13:34.464203    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:13:34.464214    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:13:34.464223    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:13:34.464231    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:13:34.464239    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:13:34.464246    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:13:34.464252    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:13:34.464259    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:13:34.464291    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:13:34.464300    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:13:34.464316    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:13:36.464616    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 13
	I0806 01:13:36.464631    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:13:36.464721    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6726
	I0806 01:13:36.465513    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 82:54:6d:e6:8d:55 in /var/db/dhcpd_leases ...
	I0806 01:13:36.465556    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:13:36.465568    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:13:36.465584    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:13:36.465603    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:13:36.465614    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:13:36.465622    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:13:36.465629    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:13:36.465644    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:13:36.465660    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:13:36.465672    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:13:36.465683    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:13:36.465691    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:13:36.465703    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:13:36.465712    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:13:36.465721    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:13:36.465728    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:13:36.465734    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:13:36.465742    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:13:38.467767    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 14
	I0806 01:13:38.467780    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:13:38.467827    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6726
	I0806 01:13:38.468672    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 82:54:6d:e6:8d:55 in /var/db/dhcpd_leases ...
	I0806 01:13:38.468741    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:13:38.468752    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:13:38.468763    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:13:38.468770    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:13:38.468782    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:13:38.468797    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:13:38.468806    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:13:38.468813    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:13:38.468821    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:13:38.468827    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:13:38.468834    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:13:38.468841    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:13:38.468849    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:13:38.468857    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:13:38.468865    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:13:38.468874    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:13:38.468886    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:13:38.468897    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:13:40.470922    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 15
	I0806 01:13:40.470939    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:13:40.470994    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6726
	I0806 01:13:40.471875    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 82:54:6d:e6:8d:55 in /var/db/dhcpd_leases ...
	I0806 01:13:40.471926    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:13:40.471940    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:13:40.471949    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:13:40.471956    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:13:40.471964    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:13:40.471970    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:13:40.471990    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:13:40.472018    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:13:40.472031    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:13:40.472041    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:13:40.472050    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:13:40.472056    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:13:40.472067    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:13:40.472078    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:13:40.472084    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:13:40.472131    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:13:40.472168    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:13:40.472201    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:13:42.472968    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 16
	I0806 01:13:42.472981    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:13:42.473066    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6726
	I0806 01:13:42.473851    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 82:54:6d:e6:8d:55 in /var/db/dhcpd_leases ...
	I0806 01:13:42.473901    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:13:42.473914    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:13:42.473922    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:13:42.473928    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:13:42.473937    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:13:42.473942    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:13:42.473949    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:13:42.473957    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:13:42.473964    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:13:42.473972    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:13:42.473978    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:13:42.473994    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:13:42.474008    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:13:42.474019    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:13:42.474028    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:13:42.474041    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:13:42.474056    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:13:42.474071    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:13:44.475726    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 17
	I0806 01:13:44.475741    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:13:44.475853    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6726
	I0806 01:13:44.476645    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 82:54:6d:e6:8d:55 in /var/db/dhcpd_leases ...
	I0806 01:13:44.476681    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:13:44.476691    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:13:44.476710    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:13:44.476724    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:13:44.476732    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:13:44.476738    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:13:44.476750    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:13:44.476764    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:13:44.476784    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:13:44.476797    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:13:44.476807    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:13:44.476816    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:13:44.476829    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:13:44.476841    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:13:44.476854    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:13:44.476869    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:13:44.476884    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:13:44.476893    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:13:46.478109    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 18
	I0806 01:13:46.478126    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:13:46.478253    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6726
	I0806 01:13:46.479030    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 82:54:6d:e6:8d:55 in /var/db/dhcpd_leases ...
	I0806 01:13:46.479078    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:13:46.479089    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:13:46.479097    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:13:46.479103    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:13:46.479114    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:13:46.479126    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:13:46.479134    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:13:46.479142    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:13:46.479159    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:13:46.479170    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:13:46.479179    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:13:46.479188    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:13:46.479195    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:13:46.479202    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:13:46.479208    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:13:46.479215    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:13:46.479221    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:13:46.479230    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:13:48.481344    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 19
	I0806 01:13:48.481355    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:13:48.481412    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6726
	I0806 01:13:48.482174    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 82:54:6d:e6:8d:55 in /var/db/dhcpd_leases ...
	I0806 01:13:48.482225    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:13:48.482236    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:13:48.482245    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:13:48.482254    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:13:48.482263    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:13:48.482287    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:13:48.482313    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:13:48.482326    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:13:48.482337    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:13:48.482346    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:13:48.482354    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:13:48.482363    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:13:48.482378    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:13:48.482392    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:13:48.482416    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:13:48.482431    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:13:48.482441    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:13:48.482447    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:13:50.482651    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 20
	I0806 01:13:50.482676    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:13:50.482759    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6726
	I0806 01:13:50.483631    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 82:54:6d:e6:8d:55 in /var/db/dhcpd_leases ...
	I0806 01:13:50.483683    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:13:50.483696    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:13:50.483708    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:13:50.483716    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:13:50.483723    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:13:50.483732    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:13:50.483748    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:13:50.483759    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:13:50.483767    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:13:50.483782    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:13:50.483792    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:13:50.483799    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:13:50.483812    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:13:50.483823    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:13:50.483831    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:13:50.483839    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:13:50.483860    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:13:50.483877    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:13:52.484823    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 21
	I0806 01:13:52.484838    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:13:52.484944    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6726
	I0806 01:13:52.485728    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 82:54:6d:e6:8d:55 in /var/db/dhcpd_leases ...
	I0806 01:13:52.485788    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:13:52.485798    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:13:52.485809    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:13:52.485820    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:13:52.485829    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:13:52.485850    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:13:52.485863    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:13:52.485871    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:13:52.485878    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:13:52.485887    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:13:52.485897    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:13:52.485908    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:13:52.485916    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:13:52.485924    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:13:52.485930    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:13:52.485939    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:13:52.485950    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:13:52.485959    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:13:54.487847    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 22
	I0806 01:13:54.487865    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:13:54.487909    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6726
	I0806 01:13:54.488848    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 82:54:6d:e6:8d:55 in /var/db/dhcpd_leases ...
	I0806 01:13:54.488890    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:13:54.488902    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:13:54.488914    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:13:54.488924    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:13:54.488935    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:13:54.488944    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:13:54.488953    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:13:54.488961    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:13:54.488967    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:13:54.488976    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:13:54.488985    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:13:54.488991    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:13:54.489003    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:13:54.489015    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:13:54.489038    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:13:54.489068    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:13:54.489074    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:13:54.489081    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:13:56.490640    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 23
	I0806 01:13:56.490656    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:13:56.490749    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6726
	I0806 01:13:56.491498    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 82:54:6d:e6:8d:55 in /var/db/dhcpd_leases ...
	I0806 01:13:56.491549    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:13:56.491563    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:13:56.491579    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:13:56.491592    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:13:56.491601    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:13:56.491608    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:13:56.491615    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:13:56.491623    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:13:56.491631    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:13:56.491639    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:13:56.491655    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:13:56.491667    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:13:56.491675    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:13:56.491681    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:13:56.491687    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:13:56.491696    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:13:56.491720    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:13:56.491732    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:13:58.493761    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 24
	I0806 01:13:58.493775    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:13:58.493886    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6726
	I0806 01:13:58.494637    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 82:54:6d:e6:8d:55 in /var/db/dhcpd_leases ...
	I0806 01:13:58.494660    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:13:58.494668    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:13:58.494676    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:13:58.494682    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:13:58.494696    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:13:58.494707    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:13:58.494723    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:13:58.494732    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:13:58.494741    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:13:58.494750    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:13:58.494767    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:13:58.494780    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:13:58.494793    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:13:58.494801    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:13:58.494810    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:13:58.494819    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:13:58.494834    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:13:58.494847    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:14:00.495824    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 25
	I0806 01:14:00.495837    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:14:00.495895    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6726
	I0806 01:14:00.496742    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 82:54:6d:e6:8d:55 in /var/db/dhcpd_leases ...
	I0806 01:14:00.496789    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:14:00.496799    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:14:00.496808    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:14:00.496815    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:14:00.496829    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:14:00.496836    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:14:00.496842    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:14:00.496848    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:14:00.496855    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:14:00.496860    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:14:00.496867    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:14:00.496876    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:14:00.496896    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:14:00.496909    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:14:00.496918    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:14:00.496926    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:14:00.496938    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:14:00.496947    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:14:02.497259    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 26
	I0806 01:14:02.497272    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:14:02.497370    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6726
	I0806 01:14:02.498145    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 82:54:6d:e6:8d:55 in /var/db/dhcpd_leases ...
	I0806 01:14:02.498208    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:14:02.498220    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:14:02.498229    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:14:02.498235    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:14:02.498253    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:14:02.498264    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:14:02.498271    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:14:02.498277    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:14:02.498295    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:14:02.498318    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:14:02.498325    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:14:02.498334    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:14:02.498343    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:14:02.498350    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:14:02.498357    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:14:02.498364    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:14:02.498372    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:14:02.498389    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:14:04.498660    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 27
	I0806 01:14:04.498673    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:14:04.498810    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6726
	I0806 01:14:04.499658    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 82:54:6d:e6:8d:55 in /var/db/dhcpd_leases ...
	I0806 01:14:04.499712    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:14:04.499725    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:14:04.499734    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:14:04.499740    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:14:04.499764    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:14:04.499776    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:14:04.499785    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:14:04.499793    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:14:04.499799    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:14:04.499830    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:14:04.499843    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:14:04.499851    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:14:04.499857    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:14:04.499869    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:14:04.499880    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:14:04.499908    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:14:04.499924    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:14:04.499938    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:14:06.500478    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 28
	I0806 01:14:06.500494    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:14:06.500566    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6726
	I0806 01:14:06.501409    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 82:54:6d:e6:8d:55 in /var/db/dhcpd_leases ...
	I0806 01:14:06.501417    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:14:06.501427    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:14:06.501436    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:14:06.501444    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:14:06.501450    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:14:06.501457    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:14:06.501463    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:14:06.501469    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:14:06.501476    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:14:06.501496    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:14:06.501515    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:14:06.501527    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:14:06.501536    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:14:06.501556    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:14:06.501569    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:14:06.501581    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:14:06.501589    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:14:06.501595    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:14:08.502505    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Attempt 29
	I0806 01:14:08.502522    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:14:08.502532    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | hyperkit pid from json: 6726
	I0806 01:14:08.503319    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Searching for 82:54:6d:e6:8d:55 in /var/db/dhcpd_leases ...
	I0806 01:14:08.503345    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0806 01:14:08.503357    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:14:08.503375    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:14:08.503390    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:14:08.503400    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:14:08.503408    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:14:08.503416    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:14:08.503422    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:14:08.503428    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:14:08.503436    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:14:08.503445    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:14:08.503459    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:14:08.503467    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:14:08.503486    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:14:08.503499    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:14:08.503507    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:14:08.503516    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:14:08.503535    6653 main.go:141] libmachine: (force-systemd-env-176000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:14:10.504512    6653 client.go:171] duration metric: took 1m1.072000281s to LocalClient.Create
	I0806 01:14:12.505168    6653 start.go:128] duration metric: took 1m3.124810685s to createHost
	I0806 01:14:12.505195    6653 start.go:83] releasing machines lock for "force-systemd-env-176000", held for 1m3.124914728s
	W0806 01:14:12.505364    6653 out.go:239] * Failed to start hyperkit VM. Running "minikube delete -p force-systemd-env-176000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 82:54:6d:e6:8d:55
	* Failed to start hyperkit VM. Running "minikube delete -p force-systemd-env-176000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 82:54:6d:e6:8d:55
	I0806 01:14:12.568835    6653 out.go:177] 
	W0806 01:14:12.589735    6653 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 82:54:6d:e6:8d:55
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 82:54:6d:e6:8d:55
	W0806 01:14:12.589748    6653 out.go:239] * 
	* 
	W0806 01:14:12.590362    6653 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 01:14:12.652711    6653 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-176000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-176000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-176000 ssh "docker info --format {{.CgroupDriver}}": exit status 50 (173.738837ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node force-systemd-env-176000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-176000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 50
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-08-06 01:14:12.938627 -0700 PDT m=+4222.106686554
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-176000 -n force-systemd-env-176000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-176000 -n force-systemd-env-176000: exit status 7 (77.906112ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0806 01:14:13.014582    6750 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0806 01:14:13.014605    6750 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-176000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "force-systemd-env-176000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-176000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-176000: (5.244424798s)
--- FAIL: TestForceSystemdEnv (233.59s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (76.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-772000 --wait=true -v=7 --alsologtostderr --driver=hyperkit 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ha-772000 --wait=true -v=7 --alsologtostderr --driver=hyperkit : exit status 90 (1m16.011257663s)

                                                
                                                
-- stdout --
	* [ha-772000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "ha-772000" primary control-plane node in "ha-772000" cluster
	* Restarting existing hyperkit VM for "ha-772000" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 00:28:25.336841    3609 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:28:25.337032    3609 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:28:25.337038    3609 out.go:304] Setting ErrFile to fd 2...
	I0806 00:28:25.337041    3609 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:28:25.337222    3609 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	I0806 00:28:25.338536    3609 out.go:298] Setting JSON to false
	I0806 00:28:25.360999    3609 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1667,"bootTime":1722927638,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0806 00:28:25.361097    3609 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 00:28:25.382950    3609 out.go:177] * [ha-772000] minikube v1.33.1 on Darwin 14.5
	I0806 00:28:25.424609    3609 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 00:28:25.424664    3609 notify.go:220] Checking for updates...
	I0806 00:28:25.467369    3609 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:28:25.488616    3609 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0806 00:28:25.509326    3609 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:28:25.530762    3609 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 00:28:25.551579    3609 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 00:28:25.572970    3609 config.go:182] Loaded profile config "ha-772000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:28:25.573604    3609 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:28:25.573682    3609 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:28:25.583447    3609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52050
	I0806 00:28:25.583833    3609 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:28:25.584323    3609 main.go:141] libmachine: Using API Version  1
	I0806 00:28:25.584342    3609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:28:25.584609    3609 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:28:25.584744    3609 main.go:141] libmachine: (ha-772000) Calling .DriverName
	I0806 00:28:25.584932    3609 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:28:25.585172    3609 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:28:25.585196    3609 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:28:25.593757    3609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52052
	I0806 00:28:25.594139    3609 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:28:25.594480    3609 main.go:141] libmachine: Using API Version  1
	I0806 00:28:25.594501    3609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:28:25.594715    3609 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:28:25.594838    3609 main.go:141] libmachine: (ha-772000) Calling .DriverName
	I0806 00:28:25.623602    3609 out.go:177] * Using the hyperkit driver based on existing profile
	I0806 00:28:25.665376    3609 start.go:297] selected driver: hyperkit
	I0806 00:28:25.665401    3609 start.go:901] validating driver "hyperkit" against &{Name:ha-772000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:ha-772000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:28:25.665701    3609 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 00:28:25.665870    3609 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:28:25.666092    3609 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19370-944/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0806 00:28:25.675534    3609 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0806 00:28:25.679372    3609 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:28:25.679396    3609 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0806 00:28:25.682041    3609 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 00:28:25.682077    3609 cni.go:84] Creating CNI manager for ""
	I0806 00:28:25.682086    3609 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0806 00:28:25.682171    3609 start.go:340] cluster config:
	{Name:ha-772000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-772000 Namespace:default APIServerHAVIP:192.16
9.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false
kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:28:25.682277    3609 iso.go:125] acquiring lock: {Name:mka9ceffb203a07dd8928fb34e5b66df1a4204ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:28:25.724602    3609 out.go:177] * Starting "ha-772000" primary control-plane node in "ha-772000" cluster
	I0806 00:28:25.745411    3609 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:28:25.745453    3609 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0806 00:28:25.745475    3609 cache.go:56] Caching tarball of preloaded images
	I0806 00:28:25.745589    3609 preload.go:172] Found /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0806 00:28:25.745605    3609 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 00:28:25.745702    3609 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/ha-772000/config.json ...
	I0806 00:28:25.746157    3609 start.go:360] acquireMachinesLock for ha-772000: {Name:mk23fe223591838ba69a1052c4474834b6e8897d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:28:25.746215    3609 start.go:364] duration metric: took 44.154µs to acquireMachinesLock for "ha-772000"
	I0806 00:28:25.746234    3609 start.go:96] Skipping create...Using existing machine configuration
	I0806 00:28:25.746243    3609 fix.go:54] fixHost starting: 
	I0806 00:28:25.746468    3609 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:28:25.746491    3609 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:28:25.755075    3609 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52054
	I0806 00:28:25.755418    3609 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:28:25.755784    3609 main.go:141] libmachine: Using API Version  1
	I0806 00:28:25.755798    3609 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:28:25.756057    3609 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:28:25.756190    3609 main.go:141] libmachine: (ha-772000) Calling .DriverName
	I0806 00:28:25.756286    3609 main.go:141] libmachine: (ha-772000) Calling .GetState
	I0806 00:28:25.756365    3609 main.go:141] libmachine: (ha-772000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:28:25.756437    3609 main.go:141] libmachine: (ha-772000) DBG | hyperkit pid from json: 3478
	I0806 00:28:25.757376    3609 main.go:141] libmachine: (ha-772000) DBG | hyperkit pid 3478 missing from process table
	I0806 00:28:25.757409    3609 fix.go:112] recreateIfNeeded on ha-772000: state=Stopped err=<nil>
	I0806 00:28:25.757426    3609 main.go:141] libmachine: (ha-772000) Calling .DriverName
	W0806 00:28:25.757513    3609 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 00:28:25.799577    3609 out.go:177] * Restarting existing hyperkit VM for "ha-772000" ...
	I0806 00:28:25.820592    3609 main.go:141] libmachine: (ha-772000) Calling .Start
	I0806 00:28:25.820845    3609 main.go:141] libmachine: (ha-772000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:28:25.820888    3609 main.go:141] libmachine: (ha-772000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/ha-772000/hyperkit.pid
	I0806 00:28:25.822674    3609 main.go:141] libmachine: (ha-772000) DBG | hyperkit pid 3478 missing from process table
	I0806 00:28:25.822686    3609 main.go:141] libmachine: (ha-772000) DBG | pid 3478 is in state "Stopped"
	I0806 00:28:25.822708    3609 main.go:141] libmachine: (ha-772000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19370-944/.minikube/machines/ha-772000/hyperkit.pid...
	I0806 00:28:25.822911    3609 main.go:141] libmachine: (ha-772000) DBG | Using UUID 13549de7-528d-4f5c-bbca-7d5140837d7f
	I0806 00:28:25.954969    3609 main.go:141] libmachine: (ha-772000) DBG | Generated MAC d2:ca:81:24:8f:65
	I0806 00:28:25.955013    3609 main.go:141] libmachine: (ha-772000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-772000
	I0806 00:28:25.955242    3609 main.go:141] libmachine: (ha-772000) DBG | 2024/08/06 00:28:25 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/ha-772000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"13549de7-528d-4f5c-bbca-7d5140837d7f", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b8960)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/ha-772000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/ha-772000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/ha-772000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 00:28:25.955282    3609 main.go:141] libmachine: (ha-772000) DBG | 2024/08/06 00:28:25 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/ha-772000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"13549de7-528d-4f5c-bbca-7d5140837d7f", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b8960)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/ha-772000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/ha-772000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/ha-772000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 00:28:25.955338    3609 main.go:141] libmachine: (ha-772000) DBG | 2024/08/06 00:28:25 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19370-944/.minikube/machines/ha-772000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "13549de7-528d-4f5c-bbca-7d5140837d7f", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/ha-772000/ha-772000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/ha-772000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/ha-772000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/ha-772000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/ha-772000/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/ha-772000/initrd,earlyprintk=serial l
oglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-772000"}
	I0806 00:28:25.955391    3609 main.go:141] libmachine: (ha-772000) DBG | 2024/08/06 00:28:25 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19370-944/.minikube/machines/ha-772000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 13549de7-528d-4f5c-bbca-7d5140837d7f -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/ha-772000/ha-772000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/ha-772000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/ha-772000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/ha-772000/console-ring -f kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/ha-772000/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/ha-772000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset noresto
re waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-772000"
	I0806 00:28:25.955408    3609 main.go:141] libmachine: (ha-772000) DBG | 2024/08/06 00:28:25 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0806 00:28:25.956939    3609 main.go:141] libmachine: (ha-772000) DBG | 2024/08/06 00:28:25 DEBUG: hyperkit: Pid is 3622
	I0806 00:28:25.957325    3609 main.go:141] libmachine: (ha-772000) DBG | Attempt 0
	I0806 00:28:25.957339    3609 main.go:141] libmachine: (ha-772000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:28:25.957405    3609 main.go:141] libmachine: (ha-772000) DBG | hyperkit pid from json: 3622
	I0806 00:28:25.959008    3609 main.go:141] libmachine: (ha-772000) DBG | Searching for d2:ca:81:24:8f:65 in /var/db/dhcpd_leases ...
	I0806 00:28:25.959103    3609 main.go:141] libmachine: (ha-772000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0806 00:28:25.959139    3609 main.go:141] libmachine: (ha-772000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:28:25.959153    3609 main.go:141] libmachine: (ha-772000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:28:25.959167    3609 main.go:141] libmachine: (ha-772000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:28:25.959183    3609 main.go:141] libmachine: (ha-772000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32147}
	I0806 00:28:25.959204    3609 main.go:141] libmachine: (ha-772000) DBG | Found match: d2:ca:81:24:8f:65
	I0806 00:28:25.959216    3609 main.go:141] libmachine: (ha-772000) DBG | IP: 192.169.0.5
	I0806 00:28:25.959267    3609 main.go:141] libmachine: (ha-772000) Calling .GetConfigRaw
	I0806 00:28:25.960033    3609 main.go:141] libmachine: (ha-772000) Calling .GetIP
	I0806 00:28:25.960244    3609 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/ha-772000/config.json ...
	I0806 00:28:25.960665    3609 machine.go:94] provisionDockerMachine start ...
	I0806 00:28:25.960677    3609 main.go:141] libmachine: (ha-772000) Calling .DriverName
	I0806 00:28:25.960873    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHHostname
	I0806 00:28:25.961020    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHPort
	I0806 00:28:25.961163    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHKeyPath
	I0806 00:28:25.961279    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHKeyPath
	I0806 00:28:25.961372    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHUsername
	I0806 00:28:25.961514    3609 main.go:141] libmachine: Using SSH client type: native
	I0806 00:28:25.961724    3609 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80560c0] 0x8058e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0806 00:28:25.961733    3609 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 00:28:25.964603    3609 main.go:141] libmachine: (ha-772000) DBG | 2024/08/06 00:28:25 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0806 00:28:26.023044    3609 main.go:141] libmachine: (ha-772000) DBG | 2024/08/06 00:28:26 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/ha-772000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0806 00:28:26.023758    3609 main.go:141] libmachine: (ha-772000) DBG | 2024/08/06 00:28:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:28:26.023779    3609 main.go:141] libmachine: (ha-772000) DBG | 2024/08/06 00:28:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:28:26.023788    3609 main.go:141] libmachine: (ha-772000) DBG | 2024/08/06 00:28:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:28:26.023796    3609 main.go:141] libmachine: (ha-772000) DBG | 2024/08/06 00:28:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:28:26.407005    3609 main.go:141] libmachine: (ha-772000) DBG | 2024/08/06 00:28:26 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0806 00:28:26.407027    3609 main.go:141] libmachine: (ha-772000) DBG | 2024/08/06 00:28:26 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0806 00:28:26.521722    3609 main.go:141] libmachine: (ha-772000) DBG | 2024/08/06 00:28:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:28:26.521740    3609 main.go:141] libmachine: (ha-772000) DBG | 2024/08/06 00:28:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:28:26.521747    3609 main.go:141] libmachine: (ha-772000) DBG | 2024/08/06 00:28:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:28:26.521756    3609 main.go:141] libmachine: (ha-772000) DBG | 2024/08/06 00:28:26 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:28:26.522589    3609 main.go:141] libmachine: (ha-772000) DBG | 2024/08/06 00:28:26 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0806 00:28:26.522598    3609 main.go:141] libmachine: (ha-772000) DBG | 2024/08/06 00:28:26 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0806 00:28:32.139355    3609 main.go:141] libmachine: (ha-772000) DBG | 2024/08/06 00:28:32 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0806 00:28:32.139388    3609 main.go:141] libmachine: (ha-772000) DBG | 2024/08/06 00:28:32 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0806 00:28:32.139400    3609 main.go:141] libmachine: (ha-772000) DBG | 2024/08/06 00:28:32 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0806 00:28:32.163486    3609 main.go:141] libmachine: (ha-772000) DBG | 2024/08/06 00:28:32 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0806 00:28:37.021273    3609 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0806 00:28:37.021305    3609 main.go:141] libmachine: (ha-772000) Calling .GetMachineName
	I0806 00:28:37.021454    3609 buildroot.go:166] provisioning hostname "ha-772000"
	I0806 00:28:37.021464    3609 main.go:141] libmachine: (ha-772000) Calling .GetMachineName
	I0806 00:28:37.021558    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHHostname
	I0806 00:28:37.021651    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHPort
	I0806 00:28:37.021745    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHKeyPath
	I0806 00:28:37.021844    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHKeyPath
	I0806 00:28:37.021913    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHUsername
	I0806 00:28:37.022040    3609 main.go:141] libmachine: Using SSH client type: native
	I0806 00:28:37.022190    3609 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80560c0] 0x8058e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0806 00:28:37.022198    3609 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-772000 && echo "ha-772000" | sudo tee /etc/hostname
	I0806 00:28:37.083961    3609 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-772000
	
	I0806 00:28:37.083987    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHHostname
	I0806 00:28:37.084107    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHPort
	I0806 00:28:37.084199    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHKeyPath
	I0806 00:28:37.084290    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHKeyPath
	I0806 00:28:37.084376    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHUsername
	I0806 00:28:37.084497    3609 main.go:141] libmachine: Using SSH client type: native
	I0806 00:28:37.084634    3609 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80560c0] 0x8058e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0806 00:28:37.084645    3609 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-772000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-772000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-772000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 00:28:37.139328    3609 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:28:37.139359    3609 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19370-944/.minikube CaCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19370-944/.minikube}
	I0806 00:28:37.139383    3609 buildroot.go:174] setting up certificates
	I0806 00:28:37.139398    3609 provision.go:84] configureAuth start
	I0806 00:28:37.139407    3609 main.go:141] libmachine: (ha-772000) Calling .GetMachineName
	I0806 00:28:37.139546    3609 main.go:141] libmachine: (ha-772000) Calling .GetIP
	I0806 00:28:37.139635    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHHostname
	I0806 00:28:37.139718    3609 provision.go:143] copyHostCerts
	I0806 00:28:37.139747    3609 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:28:37.139814    3609 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem, removing ...
	I0806 00:28:37.139823    3609 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:28:37.139967    3609 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem (1078 bytes)
	I0806 00:28:37.140174    3609 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:28:37.140215    3609 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem, removing ...
	I0806 00:28:37.140221    3609 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:28:37.140301    3609 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem (1123 bytes)
	I0806 00:28:37.140445    3609 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:28:37.140486    3609 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem, removing ...
	I0806 00:28:37.140491    3609 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:28:37.140569    3609 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem (1679 bytes)
	I0806 00:28:37.140732    3609 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem org=jenkins.ha-772000 san=[127.0.0.1 192.169.0.5 ha-772000 localhost minikube]
	I0806 00:28:37.195170    3609 provision.go:177] copyRemoteCerts
	I0806 00:28:37.195225    3609 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 00:28:37.195239    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHHostname
	I0806 00:28:37.195361    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHPort
	I0806 00:28:37.195466    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHKeyPath
	I0806 00:28:37.195549    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHUsername
	I0806 00:28:37.195649    3609 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/ha-772000/id_rsa Username:docker}
	I0806 00:28:37.228012    3609 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0806 00:28:37.228085    3609 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 00:28:37.247181    3609 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0806 00:28:37.247258    3609 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0806 00:28:37.266241    3609 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0806 00:28:37.266307    3609 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 00:28:37.285151    3609 provision.go:87] duration metric: took 145.737085ms to configureAuth
	I0806 00:28:37.285164    3609 buildroot.go:189] setting minikube options for container-runtime
	I0806 00:28:37.285341    3609 config.go:182] Loaded profile config "ha-772000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:28:37.285355    3609 main.go:141] libmachine: (ha-772000) Calling .DriverName
	I0806 00:28:37.285480    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHHostname
	I0806 00:28:37.285572    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHPort
	I0806 00:28:37.285646    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHKeyPath
	I0806 00:28:37.285722    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHKeyPath
	I0806 00:28:37.285805    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHUsername
	I0806 00:28:37.285923    3609 main.go:141] libmachine: Using SSH client type: native
	I0806 00:28:37.286049    3609 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80560c0] 0x8058e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0806 00:28:37.286057    3609 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0806 00:28:37.336775    3609 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0806 00:28:37.336787    3609 buildroot.go:70] root file system type: tmpfs
	I0806 00:28:37.336864    3609 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0806 00:28:37.336877    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHHostname
	I0806 00:28:37.337038    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHPort
	I0806 00:28:37.337154    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHKeyPath
	I0806 00:28:37.337255    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHKeyPath
	I0806 00:28:37.337344    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHUsername
	I0806 00:28:37.337473    3609 main.go:141] libmachine: Using SSH client type: native
	I0806 00:28:37.337616    3609 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80560c0] 0x8058e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0806 00:28:37.337664    3609 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0806 00:28:37.396495    3609 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0806 00:28:37.396524    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHHostname
	I0806 00:28:37.396664    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHPort
	I0806 00:28:37.396751    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHKeyPath
	I0806 00:28:37.396837    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHKeyPath
	I0806 00:28:37.396957    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHUsername
	I0806 00:28:37.397113    3609 main.go:141] libmachine: Using SSH client type: native
	I0806 00:28:37.397260    3609 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80560c0] 0x8058e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0806 00:28:37.397272    3609 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0806 00:28:39.071429    3609 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0806 00:28:39.071445    3609 machine.go:97] duration metric: took 13.110739863s to provisionDockerMachine
	I0806 00:28:39.071462    3609 start.go:293] postStartSetup for "ha-772000" (driver="hyperkit")
	I0806 00:28:39.071471    3609 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 00:28:39.071480    3609 main.go:141] libmachine: (ha-772000) Calling .DriverName
	I0806 00:28:39.071672    3609 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 00:28:39.071692    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHHostname
	I0806 00:28:39.071818    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHPort
	I0806 00:28:39.071953    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHKeyPath
	I0806 00:28:39.072087    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHUsername
	I0806 00:28:39.072207    3609 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/ha-772000/id_rsa Username:docker}
	I0806 00:28:39.112155    3609 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 00:28:39.116283    3609 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 00:28:39.116296    3609 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/addons for local assets ...
	I0806 00:28:39.116403    3609 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/files for local assets ...
	I0806 00:28:39.116590    3609 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> 14372.pem in /etc/ssl/certs
	I0806 00:28:39.116596    3609 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> /etc/ssl/certs/14372.pem
	I0806 00:28:39.116807    3609 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 00:28:39.128246    3609 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem --> /etc/ssl/certs/14372.pem (1708 bytes)
	I0806 00:28:39.156160    3609 start.go:296] duration metric: took 84.687899ms for postStartSetup
	I0806 00:28:39.156185    3609 main.go:141] libmachine: (ha-772000) Calling .DriverName
	I0806 00:28:39.156365    3609 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0806 00:28:39.156378    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHHostname
	I0806 00:28:39.156491    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHPort
	I0806 00:28:39.156598    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHKeyPath
	I0806 00:28:39.156700    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHUsername
	I0806 00:28:39.156780    3609 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/ha-772000/id_rsa Username:docker}
	I0806 00:28:39.188263    3609 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0806 00:28:39.188318    3609 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0806 00:28:39.223437    3609 fix.go:56] duration metric: took 13.477149952s for fixHost
	I0806 00:28:39.223459    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHHostname
	I0806 00:28:39.223597    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHPort
	I0806 00:28:39.223687    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHKeyPath
	I0806 00:28:39.223777    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHKeyPath
	I0806 00:28:39.223862    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHUsername
	I0806 00:28:39.224002    3609 main.go:141] libmachine: Using SSH client type: native
	I0806 00:28:39.224149    3609 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80560c0] 0x8058e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0806 00:28:39.224156    3609 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0806 00:28:39.274409    3609 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722929319.425606410
	
	I0806 00:28:39.274421    3609 fix.go:216] guest clock: 1722929319.425606410
	I0806 00:28:39.274426    3609 fix.go:229] Guest: 2024-08-06 00:28:39.42560641 -0700 PDT Remote: 2024-08-06 00:28:39.223449 -0700 PDT m=+13.921452878 (delta=202.15741ms)
	I0806 00:28:39.274445    3609 fix.go:200] guest clock delta is within tolerance: 202.15741ms
	I0806 00:28:39.274448    3609 start.go:83] releasing machines lock for "ha-772000", held for 13.528195587s
	I0806 00:28:39.274470    3609 main.go:141] libmachine: (ha-772000) Calling .DriverName
	I0806 00:28:39.274600    3609 main.go:141] libmachine: (ha-772000) Calling .GetIP
	I0806 00:28:39.274719    3609 main.go:141] libmachine: (ha-772000) Calling .DriverName
	I0806 00:28:39.275004    3609 main.go:141] libmachine: (ha-772000) Calling .DriverName
	I0806 00:28:39.275103    3609 main.go:141] libmachine: (ha-772000) Calling .DriverName
	I0806 00:28:39.275183    3609 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 00:28:39.275209    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHHostname
	I0806 00:28:39.275241    3609 ssh_runner.go:195] Run: cat /version.json
	I0806 00:28:39.275251    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHHostname
	I0806 00:28:39.275305    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHPort
	I0806 00:28:39.275363    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHPort
	I0806 00:28:39.275412    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHKeyPath
	I0806 00:28:39.275462    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHKeyPath
	I0806 00:28:39.275493    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHUsername
	I0806 00:28:39.275546    3609 main.go:141] libmachine: (ha-772000) Calling .GetSSHUsername
	I0806 00:28:39.275577    3609 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/ha-772000/id_rsa Username:docker}
	I0806 00:28:39.275613    3609 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/ha-772000/id_rsa Username:docker}
	I0806 00:28:39.357702    3609 ssh_runner.go:195] Run: systemctl --version
	I0806 00:28:39.363010    3609 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 00:28:39.367241    3609 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 00:28:39.367275    3609 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 00:28:39.380750    3609 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 00:28:39.380764    3609 start.go:495] detecting cgroup driver to use...
	I0806 00:28:39.380855    3609 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:28:39.396731    3609 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0806 00:28:39.405853    3609 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0806 00:28:39.414914    3609 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0806 00:28:39.414956    3609 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0806 00:28:39.423994    3609 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:28:39.432917    3609 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0806 00:28:39.441811    3609 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:28:39.450816    3609 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 00:28:39.459806    3609 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0806 00:28:39.468877    3609 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0806 00:28:39.477972    3609 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0806 00:28:39.486895    3609 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 00:28:39.495062    3609 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 00:28:39.503313    3609 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:28:39.605803    3609 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0806 00:28:39.625347    3609 start.go:495] detecting cgroup driver to use...
	I0806 00:28:39.625424    3609 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0806 00:28:39.644698    3609 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:28:39.655730    3609 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 00:28:39.677175    3609 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:28:39.687499    3609 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:28:39.697687    3609 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0806 00:28:39.721421    3609 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:28:39.731480    3609 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:28:39.746497    3609 ssh_runner.go:195] Run: which cri-dockerd
	I0806 00:28:39.749388    3609 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0806 00:28:39.756482    3609 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0806 00:28:39.769886    3609 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0806 00:28:39.874746    3609 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0806 00:28:39.990045    3609 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0806 00:28:39.990123    3609 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0806 00:28:40.003071    3609 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:28:40.104095    3609 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:29:41.128038    3609 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.023780506s)
	I0806 00:29:41.128104    3609 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0806 00:29:41.164297    3609 out.go:177] 
	W0806 00:29:41.185846    3609 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 06 07:28:37 ha-772000 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:28:37 ha-772000 dockerd[488]: time="2024-08-06T07:28:37.836841008Z" level=info msg="Starting up"
	Aug 06 07:28:37 ha-772000 dockerd[488]: time="2024-08-06T07:28:37.837377676Z" level=info msg="containerd not running, starting managed containerd"
	Aug 06 07:28:37 ha-772000 dockerd[488]: time="2024-08-06T07:28:37.837826847Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=495
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.855148492Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.870313625Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.870380325Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.870445376Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.870480577Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.870609411Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.870652541Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.870780870Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.870823386Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.870854348Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.870883052Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.870994269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.871252443Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.872854373Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.872905148Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.873041794Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.873084704Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.873195984Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.873317675Z" level=info msg="metadata content store policy set" policy=shared
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.874900727Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.874961422Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.875001666Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.875048671Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.875089731Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.875159073Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.875381403Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.875462578Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.875498081Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.875528289Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.875561897Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.875598331Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.875632034Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.875662571Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.875698646Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.875731614Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.875764188Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.875794364Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.875832283Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.875864495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.875929393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.875974331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.876010182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.876040609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.876068986Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.876097745Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.876126521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.876156847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.876185445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.876213724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.876243899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.876274536Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.876312578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.876451385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.876506857Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.876567896Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.876610057Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.876644613Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.876677527Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.876706561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.876738665Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.876769760Z" level=info msg="NRI interface is disabled by configuration."
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.877517378Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.877599360Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.877652023Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 06 07:28:37 ha-772000 dockerd[495]: time="2024-08-06T07:28:37.877666632Z" level=info msg="containerd successfully booted in 0.023336s"
	Aug 06 07:28:38 ha-772000 dockerd[488]: time="2024-08-06T07:28:38.861924492Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 06 07:28:38 ha-772000 dockerd[488]: time="2024-08-06T07:28:38.907453630Z" level=info msg="Loading containers: start."
	Aug 06 07:28:39 ha-772000 dockerd[488]: time="2024-08-06T07:28:39.084462375Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 06 07:28:39 ha-772000 dockerd[488]: time="2024-08-06T07:28:39.145038997Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 06 07:28:39 ha-772000 dockerd[488]: time="2024-08-06T07:28:39.188383105Z" level=warning msg="error locating sandbox id 2a8264d9360ab1f07ebd53e254abba12ead7c70932ec076a65c988cabb1aadc6: sandbox 2a8264d9360ab1f07ebd53e254abba12ead7c70932ec076a65c988cabb1aadc6 not found"
	Aug 06 07:28:39 ha-772000 dockerd[488]: time="2024-08-06T07:28:39.188618240Z" level=info msg="Loading containers: done."
	Aug 06 07:28:39 ha-772000 dockerd[488]: time="2024-08-06T07:28:39.199586658Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 06 07:28:39 ha-772000 dockerd[488]: time="2024-08-06T07:28:39.199747156Z" level=info msg="Daemon has completed initialization"
	Aug 06 07:28:39 ha-772000 dockerd[488]: time="2024-08-06T07:28:39.219447144Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 06 07:28:39 ha-772000 dockerd[488]: time="2024-08-06T07:28:39.219521582Z" level=info msg="API listen on [::]:2376"
	Aug 06 07:28:39 ha-772000 systemd[1]: Started Docker Application Container Engine.
	Aug 06 07:28:40 ha-772000 dockerd[488]: time="2024-08-06T07:28:40.267824874Z" level=info msg="Processing signal 'terminated'"
	Aug 06 07:28:40 ha-772000 dockerd[488]: time="2024-08-06T07:28:40.268766942Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 06 07:28:40 ha-772000 systemd[1]: Stopping Docker Application Container Engine...
	Aug 06 07:28:40 ha-772000 dockerd[488]: time="2024-08-06T07:28:40.269268884Z" level=info msg="Daemon shutdown complete"
	Aug 06 07:28:40 ha-772000 dockerd[488]: time="2024-08-06T07:28:40.269317952Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 06 07:28:40 ha-772000 dockerd[488]: time="2024-08-06T07:28:40.269368391Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 06 07:28:41 ha-772000 systemd[1]: docker.service: Deactivated successfully.
	Aug 06 07:28:41 ha-772000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:28:41 ha-772000 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:28:41 ha-772000 dockerd[1173]: time="2024-08-06T07:28:41.305630280Z" level=info msg="Starting up"
	Aug 06 07:29:41 ha-772000 dockerd[1173]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 06 07:29:41 ha-772000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 06 07:29:41 ha-772000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 06 07:29:41 ha-772000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	Aug 06 07:29:41 ha-772000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 06 07:29:41 ha-772000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0806 00:29:41.186012    3609 out.go:239] * 
	W0806 00:29:41.187297    3609 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 00:29:41.250864    3609 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-amd64 start -p ha-772000 --wait=true -v=7 --alsologtostderr --driver=hyperkit " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-772000 -n ha-772000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-772000 -n ha-772000: exit status 6 (153.684523ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0806 00:29:41.449216    3646 status.go:417] kubeconfig endpoint: get endpoint: "ha-772000" does not appear in /Users/jenkins/minikube-integration/19370-944/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-772000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (76.18s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:413: expected profile "ha-772000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-772000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-772000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACoun
t\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-772000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"Ku
bernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugi
n\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":fa
lse,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-772000 -n ha-772000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-772000 -n ha-772000: exit status 6 (142.695846ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0806 00:29:41.758871    3657 status.go:417] kubeconfig endpoint: get endpoint: "ha-772000" does not appear in /Users/jenkins/minikube-integration/19370-944/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-772000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.31s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-772000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p ha-772000 --control-plane -v=7 --alsologtostderr: exit status 83 (150.255778ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-772000-m02 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-772000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 00:29:41.823927    3662 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:29:41.824124    3662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:29:41.824134    3662 out.go:304] Setting ErrFile to fd 2...
	I0806 00:29:41.824138    3662 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:29:41.824329    3662 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	I0806 00:29:41.824663    3662 mustload.go:65] Loading cluster: ha-772000
	I0806 00:29:41.824983    3662 config.go:182] Loaded profile config "ha-772000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:29:41.825327    3662 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:29:41.825374    3662 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:29:41.833569    3662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52100
	I0806 00:29:41.833997    3662 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:29:41.834418    3662 main.go:141] libmachine: Using API Version  1
	I0806 00:29:41.834438    3662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:29:41.834676    3662 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:29:41.834797    3662 main.go:141] libmachine: (ha-772000) Calling .GetState
	I0806 00:29:41.834886    3662 main.go:141] libmachine: (ha-772000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:29:41.834951    3662 main.go:141] libmachine: (ha-772000) DBG | hyperkit pid from json: 3622
	I0806 00:29:41.835896    3662 host.go:66] Checking if "ha-772000" exists ...
	I0806 00:29:41.836135    3662 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:29:41.836159    3662 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:29:41.844454    3662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52102
	I0806 00:29:41.844780    3662 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:29:41.845122    3662 main.go:141] libmachine: Using API Version  1
	I0806 00:29:41.845136    3662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:29:41.845339    3662 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:29:41.845449    3662 main.go:141] libmachine: (ha-772000) Calling .DriverName
	I0806 00:29:41.845770    3662 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:29:41.845791    3662 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:29:41.853994    3662 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52104
	I0806 00:29:41.854318    3662 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:29:41.854663    3662 main.go:141] libmachine: Using API Version  1
	I0806 00:29:41.854679    3662 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:29:41.854871    3662 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:29:41.854991    3662 main.go:141] libmachine: (ha-772000-m02) Calling .GetState
	I0806 00:29:41.855077    3662 main.go:141] libmachine: (ha-772000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:29:41.855147    3662 main.go:141] libmachine: (ha-772000-m02) DBG | hyperkit pid from json: 3489
	I0806 00:29:41.856077    3662 main.go:141] libmachine: (ha-772000-m02) DBG | hyperkit pid 3489 missing from process table
	I0806 00:29:41.877708    3662 out.go:177] * The control-plane node ha-772000-m02 host is not running: state=Stopped
	I0806 00:29:41.898362    3662 out.go:177]   To start a cluster, run: "minikube start -p ha-772000"

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-amd64 node add -p ha-772000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-772000 -n ha-772000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-772000 -n ha-772000: exit status 6 (142.295761ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0806 00:29:42.052247    3667 status.go:417] kubeconfig endpoint: get endpoint: "ha-772000" does not appear in /Users/jenkins/minikube-integration/19370-944/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-772000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.29s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:304: expected profile "ha-772000" in json of 'profile list' to include 4 nodes but have 3 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-772000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-772000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServe
rPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-772000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesVersion\
":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\
":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMet
rics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
ha_test.go:307: expected profile "ha-772000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-772000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-772000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-772000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"Kuber
netesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\"
:false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false
,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-772000 -n ha-772000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-772000 -n ha-772000: exit status 6 (142.644179ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0806 00:29:42.362986    3678 status.go:417] kubeconfig endpoint: get endpoint: "ha-772000" does not appear in /Users/jenkins/minikube-integration/19370-944/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-772000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.31s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (136.8s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-243000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
E0806 00:33:22.324864    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p mount-start-1-243000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : exit status 80 (2m16.718372927s)

                                                
                                                
-- stdout --
	* [mount-start-1-243000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting minikube without Kubernetes in cluster mount-start-1-243000
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "mount-start-1-243000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for de:f8:1d:fa:4f:c
	* Failed to start hyperkit VM. Running "minikube delete -p mount-start-1-243000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for f2:f:9a:9:6b:f6
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for f2:f:9a:9:6b:f6
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-amd64 start -p mount-start-1-243000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-243000 -n mount-start-1-243000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-243000 -n mount-start-1-243000: exit status 7 (76.469332ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0806 00:35:27.157375    4273 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0806 00:35:27.157397    4273 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-243000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMountStart/serial/StartWithMountFirst (136.80s)

TestMultiNode/serial/FreshStart2Nodes (259.59s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-100000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
E0806 00:37:41.393066    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
E0806 00:38:22.329665    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
E0806 00:39:04.446585    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-100000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : exit status 90 (4m16.761979802s)

-- stdout --
	* [multinode-100000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "multinode-100000" primary control-plane node in "multinode-100000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	
	* Starting "multinode-100000-m02" worker node in "multinode-100000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Found network options:
	  - NO_PROXY=192.169.0.13
	
	

-- /stdout --
** stderr ** 
	I0806 00:35:32.676325    4292 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:35:32.676601    4292 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:35:32.676607    4292 out.go:304] Setting ErrFile to fd 2...
	I0806 00:35:32.676610    4292 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:35:32.676768    4292 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	I0806 00:35:32.678248    4292 out.go:298] Setting JSON to false
	I0806 00:35:32.700659    4292 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2094,"bootTime":1722927638,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0806 00:35:32.700749    4292 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 00:35:32.723275    4292 out.go:177] * [multinode-100000] minikube v1.33.1 on Darwin 14.5
	I0806 00:35:32.765686    4292 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 00:35:32.765838    4292 notify.go:220] Checking for updates...
	I0806 00:35:32.808341    4292 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:35:32.829496    4292 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0806 00:35:32.850407    4292 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:35:32.871672    4292 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 00:35:32.892641    4292 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 00:35:32.913945    4292 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:35:32.944520    4292 out.go:177] * Using the hyperkit driver based on user configuration
	I0806 00:35:32.986143    4292 start.go:297] selected driver: hyperkit
	I0806 00:35:32.986161    4292 start.go:901] validating driver "hyperkit" against <nil>
	I0806 00:35:32.986176    4292 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 00:35:32.989717    4292 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:35:32.989824    4292 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19370-944/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0806 00:35:32.998218    4292 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0806 00:35:33.002169    4292 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:35:33.002189    4292 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0806 00:35:33.002223    4292 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 00:35:33.002423    4292 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 00:35:33.002481    4292 cni.go:84] Creating CNI manager for ""
	I0806 00:35:33.002490    4292 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0806 00:35:33.002502    4292 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0806 00:35:33.002569    4292 start.go:340] cluster config:
	{Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:35:33.002652    4292 iso.go:125] acquiring lock: {Name:mka9ceffb203a07dd8928fb34e5b66df1a4204ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:35:33.044508    4292 out.go:177] * Starting "multinode-100000" primary control-plane node in "multinode-100000" cluster
	I0806 00:35:33.065219    4292 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:35:33.065293    4292 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0806 00:35:33.065354    4292 cache.go:56] Caching tarball of preloaded images
	I0806 00:35:33.065635    4292 preload.go:172] Found /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0806 00:35:33.065654    4292 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 00:35:33.066173    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:35:33.066211    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json: {Name:mk72349cbf3074da6761af52b168e673548f3ffe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:35:33.066817    4292 start.go:360] acquireMachinesLock for multinode-100000: {Name:mk23fe223591838ba69a1052c4474834b6e8897d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:35:33.066922    4292 start.go:364] duration metric: took 85.684µs to acquireMachinesLock for "multinode-100000"
	I0806 00:35:33.066972    4292 start.go:93] Provisioning new machine with config: &{Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 00:35:33.067065    4292 start.go:125] createHost starting for "" (driver="hyperkit")
	I0806 00:35:33.088582    4292 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 00:35:33.088841    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:35:33.088907    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:35:33.098805    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52410
	I0806 00:35:33.099159    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:35:33.099600    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:35:33.099614    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:35:33.099818    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:35:33.099943    4292 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:35:33.100033    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:33.100130    4292 start.go:159] libmachine.API.Create for "multinode-100000" (driver="hyperkit")
	I0806 00:35:33.100152    4292 client.go:168] LocalClient.Create starting
	I0806 00:35:33.100189    4292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem
	I0806 00:35:33.100243    4292 main.go:141] libmachine: Decoding PEM data...
	I0806 00:35:33.100257    4292 main.go:141] libmachine: Parsing certificate...
	I0806 00:35:33.100320    4292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem
	I0806 00:35:33.100359    4292 main.go:141] libmachine: Decoding PEM data...
	I0806 00:35:33.100370    4292 main.go:141] libmachine: Parsing certificate...
	I0806 00:35:33.100382    4292 main.go:141] libmachine: Running pre-create checks...
	I0806 00:35:33.100392    4292 main.go:141] libmachine: (multinode-100000) Calling .PreCreateCheck
	I0806 00:35:33.100485    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:33.100635    4292 main.go:141] libmachine: (multinode-100000) Calling .GetConfigRaw
	I0806 00:35:33.109837    4292 main.go:141] libmachine: Creating machine...
	I0806 00:35:33.109854    4292 main.go:141] libmachine: (multinode-100000) Calling .Create
	I0806 00:35:33.110025    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:33.110277    4292 main.go:141] libmachine: (multinode-100000) DBG | I0806 00:35:33.110022    4300 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 00:35:33.110418    4292 main.go:141] libmachine: (multinode-100000) Downloading /Users/jenkins/minikube-integration/19370-944/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-944/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 00:35:33.295827    4292 main.go:141] libmachine: (multinode-100000) DBG | I0806 00:35:33.295690    4300 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa...
	I0806 00:35:33.502634    4292 main.go:141] libmachine: (multinode-100000) DBG | I0806 00:35:33.502493    4300 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/multinode-100000.rawdisk...
	I0806 00:35:33.502655    4292 main.go:141] libmachine: (multinode-100000) DBG | Writing magic tar header
	I0806 00:35:33.502665    4292 main.go:141] libmachine: (multinode-100000) DBG | Writing SSH key tar header
	I0806 00:35:33.503537    4292 main.go:141] libmachine: (multinode-100000) DBG | I0806 00:35:33.503390    4300 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000 ...
	I0806 00:35:33.877390    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:33.877412    4292 main.go:141] libmachine: (multinode-100000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/hyperkit.pid
	I0806 00:35:33.877424    4292 main.go:141] libmachine: (multinode-100000) DBG | Using UUID 9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848
	I0806 00:35:33.988705    4292 main.go:141] libmachine: (multinode-100000) DBG | Generated MAC 1a:eb:5b:3:28:91
	I0806 00:35:33.988725    4292 main.go:141] libmachine: (multinode-100000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000
	I0806 00:35:33.988759    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000aa330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(
nil)}
	I0806 00:35:33.988793    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000aa330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(
nil)}
	I0806 00:35:33.988839    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/multinode-100000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage,/Users/jenkins/minikube-integration/19370-944/
.minikube/machines/multinode-100000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"}
	I0806 00:35:33.988870    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/multinode-100000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/console-ring -f kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/initrd,earlyprintk=serial
loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"
	I0806 00:35:33.988893    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0806 00:35:33.991956    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: Pid is 4303
	I0806 00:35:33.992376    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 0
	I0806 00:35:33.992391    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:33.992446    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:33.993278    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:33.993360    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:33.993380    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:33.993405    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:33.993424    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:33.993437    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:33.993449    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:33.993464    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:33.993498    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:33.993520    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:33.993540    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:33.993552    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:33.993562    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:33.999245    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0806 00:35:34.053136    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0806 00:35:34.053714    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:35:34.053737    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:35:34.053746    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:35:34.053754    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:35:34.433368    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0806 00:35:34.433384    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0806 00:35:34.548018    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:35:34.548040    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:35:34.548066    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:35:34.548085    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:35:34.548944    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0806 00:35:34.548954    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0806 00:35:35.995149    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 1
	I0806 00:35:35.995163    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:35.995266    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:35.996054    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:35.996094    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:35.996108    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:35.996132    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:35.996169    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:35.996185    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:35.996200    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:35.996223    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:35.996236    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:35.996250    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:35.996258    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:35.996265    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:35.996272    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:37.997721    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 2
	I0806 00:35:37.997737    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:37.997833    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:37.998751    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:37.998796    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:37.998808    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:37.998817    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:37.998824    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:37.998834    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:37.998843    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:37.998850    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:37.998857    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:37.998872    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:37.998885    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:37.998906    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:37.998915    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:40.000050    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 3
	I0806 00:35:40.000064    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:40.000167    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:40.000922    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:40.000982    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:40.000992    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:40.001002    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:40.001009    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:40.001016    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:40.001021    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:40.001028    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:40.001034    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:40.001051    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:40.001065    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:40.001075    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:40.001092    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:40.125670    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:40 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0806 00:35:40.125726    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:40 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0806 00:35:40.125735    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:40 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0806 00:35:40.149566    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:40 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0806 00:35:42.001968    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 4
	I0806 00:35:42.001983    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:42.002066    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:42.002835    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:42.002890    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:42.002900    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:42.002909    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:42.002917    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:42.002940    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:42.002948    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:42.002955    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:42.002964    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:42.002970    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:42.002978    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:42.002985    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:42.002996    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:44.004662    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 5
	I0806 00:35:44.004678    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:44.004700    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:44.005526    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:44.005569    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:35:44.005581    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:35:44.005591    4292 main.go:141] libmachine: (multinode-100000) DBG | Found match: 1a:eb:5b:3:28:91
	I0806 00:35:44.005619    4292 main.go:141] libmachine: (multinode-100000) DBG | IP: 192.169.0.13
	I0806 00:35:44.005700    4292 main.go:141] libmachine: (multinode-100000) Calling .GetConfigRaw
	I0806 00:35:44.006323    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:44.006428    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:44.006524    4292 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0806 00:35:44.006537    4292 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:35:44.006634    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:44.006694    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:44.007476    4292 main.go:141] libmachine: Detecting operating system of created instance...
	I0806 00:35:44.007487    4292 main.go:141] libmachine: Waiting for SSH to be available...
	I0806 00:35:44.007493    4292 main.go:141] libmachine: Getting to WaitForSSH function...
	I0806 00:35:44.007498    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:44.007591    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:44.007674    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:44.007764    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:44.007853    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:44.007987    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:44.008184    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:44.008192    4292 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0806 00:35:45.076448    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:35:45.076465    4292 main.go:141] libmachine: Detecting the provisioner...
	I0806 00:35:45.076471    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.076624    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.076724    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.076819    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.076915    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.077045    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.077189    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.077197    4292 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0806 00:35:45.144548    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0806 00:35:45.144591    4292 main.go:141] libmachine: found compatible host: buildroot
	I0806 00:35:45.144598    4292 main.go:141] libmachine: Provisioning with buildroot...
	I0806 00:35:45.144603    4292 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:35:45.144740    4292 buildroot.go:166] provisioning hostname "multinode-100000"
	I0806 00:35:45.144749    4292 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:35:45.144843    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.144938    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.145034    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.145124    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.145213    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.145351    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.145492    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.145501    4292 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-100000 && echo "multinode-100000" | sudo tee /etc/hostname
	I0806 00:35:45.223228    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-100000
	
	I0806 00:35:45.223249    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.223379    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.223481    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.223570    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.223660    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.223790    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.223939    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.223951    4292 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-100000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-100000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-100000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 00:35:45.292034    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:35:45.292059    4292 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19370-944/.minikube CaCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19370-944/.minikube}
	I0806 00:35:45.292078    4292 buildroot.go:174] setting up certificates
	I0806 00:35:45.292089    4292 provision.go:84] configureAuth start
	I0806 00:35:45.292095    4292 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:35:45.292225    4292 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:35:45.292323    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.292419    4292 provision.go:143] copyHostCerts
	I0806 00:35:45.292449    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:35:45.292512    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem, removing ...
	I0806 00:35:45.292520    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:35:45.292668    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem (1078 bytes)
	I0806 00:35:45.292900    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:35:45.292931    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem, removing ...
	I0806 00:35:45.292935    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:35:45.293022    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem (1123 bytes)
	I0806 00:35:45.293179    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:35:45.293218    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem, removing ...
	I0806 00:35:45.293223    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:35:45.293307    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem (1679 bytes)
	I0806 00:35:45.293461    4292 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem org=jenkins.multinode-100000 san=[127.0.0.1 192.169.0.13 localhost minikube multinode-100000]
	I0806 00:35:45.520073    4292 provision.go:177] copyRemoteCerts
	I0806 00:35:45.520131    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 00:35:45.520149    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.520304    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.520400    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.520492    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.520588    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:35:45.562400    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0806 00:35:45.562481    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 00:35:45.581346    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0806 00:35:45.581402    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0806 00:35:45.600722    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0806 00:35:45.600779    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 00:35:45.620152    4292 provision.go:87] duration metric: took 328.044128ms to configureAuth
	I0806 00:35:45.620167    4292 buildroot.go:189] setting minikube options for container-runtime
	I0806 00:35:45.620308    4292 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:35:45.620324    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:45.620480    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.620572    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.620655    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.620746    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.620832    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.620951    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.621092    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.621099    4292 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0806 00:35:45.688009    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0806 00:35:45.688025    4292 buildroot.go:70] root file system type: tmpfs
	I0806 00:35:45.688103    4292 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0806 00:35:45.688116    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.688258    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.688371    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.688463    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.688579    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.688745    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.688882    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.688931    4292 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0806 00:35:45.766293    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0806 00:35:45.766319    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.766466    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.766564    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.766645    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.766724    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.766843    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.766987    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.766999    4292 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0806 00:35:47.341714    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0806 00:35:47.341733    4292 main.go:141] libmachine: Checking connection to Docker...
	I0806 00:35:47.341750    4292 main.go:141] libmachine: (multinode-100000) Calling .GetURL
	I0806 00:35:47.341889    4292 main.go:141] libmachine: Docker is up and running!
	I0806 00:35:47.341898    4292 main.go:141] libmachine: Reticulating splines...
	I0806 00:35:47.341902    4292 client.go:171] duration metric: took 14.241464585s to LocalClient.Create
	I0806 00:35:47.341919    4292 start.go:167] duration metric: took 14.241510649s to libmachine.API.Create "multinode-100000"
	I0806 00:35:47.341930    4292 start.go:293] postStartSetup for "multinode-100000" (driver="hyperkit")
	I0806 00:35:47.341937    4292 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 00:35:47.341947    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.342092    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 00:35:47.342105    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:47.342199    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:47.342285    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.342379    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:47.342467    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:35:47.382587    4292 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 00:35:47.385469    4292 command_runner.go:130] > NAME=Buildroot
	I0806 00:35:47.385477    4292 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0806 00:35:47.385481    4292 command_runner.go:130] > ID=buildroot
	I0806 00:35:47.385485    4292 command_runner.go:130] > VERSION_ID=2023.02.9
	I0806 00:35:47.385489    4292 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0806 00:35:47.385581    4292 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 00:35:47.385594    4292 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/addons for local assets ...
	I0806 00:35:47.385696    4292 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/files for local assets ...
	I0806 00:35:47.385887    4292 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> 14372.pem in /etc/ssl/certs
	I0806 00:35:47.385903    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> /etc/ssl/certs/14372.pem
	I0806 00:35:47.386118    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 00:35:47.394135    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem --> /etc/ssl/certs/14372.pem (1708 bytes)
	I0806 00:35:47.413151    4292 start.go:296] duration metric: took 71.212336ms for postStartSetup
	I0806 00:35:47.413177    4292 main.go:141] libmachine: (multinode-100000) Calling .GetConfigRaw
	I0806 00:35:47.413783    4292 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:35:47.413932    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:35:47.414265    4292 start.go:128] duration metric: took 14.346903661s to createHost
	I0806 00:35:47.414279    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:47.414369    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:47.414451    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.414534    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.414620    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:47.414723    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:47.414850    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:47.414859    4292 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0806 00:35:47.480376    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722929747.524109427
	
	I0806 00:35:47.480388    4292 fix.go:216] guest clock: 1722929747.524109427
	I0806 00:35:47.480393    4292 fix.go:229] Guest: 2024-08-06 00:35:47.524109427 -0700 PDT Remote: 2024-08-06 00:35:47.414273 -0700 PDT m=+14.774098631 (delta=109.836427ms)
	I0806 00:35:47.480413    4292 fix.go:200] guest clock delta is within tolerance: 109.836427ms
	I0806 00:35:47.480416    4292 start.go:83] releasing machines lock for "multinode-100000", held for 14.413201307s
	I0806 00:35:47.480435    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.480582    4292 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:35:47.480686    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.481025    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.481144    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.481220    4292 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 00:35:47.481250    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:47.481279    4292 ssh_runner.go:195] Run: cat /version.json
	I0806 00:35:47.481291    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:47.481352    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:47.481353    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:47.481449    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.481463    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.481541    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:47.481556    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:47.481638    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:35:47.481653    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:35:47.582613    4292 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0806 00:35:47.583428    4292 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0806 00:35:47.583596    4292 ssh_runner.go:195] Run: systemctl --version
	I0806 00:35:47.588843    4292 command_runner.go:130] > systemd 252 (252)
	I0806 00:35:47.588866    4292 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0806 00:35:47.588920    4292 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0806 00:35:47.593612    4292 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0806 00:35:47.593639    4292 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 00:35:47.593687    4292 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 00:35:47.607350    4292 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0806 00:35:47.607480    4292 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 00:35:47.607494    4292 start.go:495] detecting cgroup driver to use...
	I0806 00:35:47.607588    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:35:47.622260    4292 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0806 00:35:47.622586    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0806 00:35:47.631764    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0806 00:35:47.640650    4292 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0806 00:35:47.640704    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0806 00:35:47.649724    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:35:47.658558    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0806 00:35:47.667341    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:35:47.677183    4292 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 00:35:47.686281    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0806 00:35:47.695266    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0806 00:35:47.704014    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0806 00:35:47.712970    4292 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 00:35:47.720743    4292 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0806 00:35:47.720841    4292 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 00:35:47.728846    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:35:47.828742    4292 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0806 00:35:47.848191    4292 start.go:495] detecting cgroup driver to use...
	I0806 00:35:47.848271    4292 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0806 00:35:47.862066    4292 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0806 00:35:47.862604    4292 command_runner.go:130] > [Unit]
	I0806 00:35:47.862619    4292 command_runner.go:130] > Description=Docker Application Container Engine
	I0806 00:35:47.862625    4292 command_runner.go:130] > Documentation=https://docs.docker.com
	I0806 00:35:47.862630    4292 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0806 00:35:47.862634    4292 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0806 00:35:47.862642    4292 command_runner.go:130] > StartLimitBurst=3
	I0806 00:35:47.862646    4292 command_runner.go:130] > StartLimitIntervalSec=60
	I0806 00:35:47.862663    4292 command_runner.go:130] > [Service]
	I0806 00:35:47.862670    4292 command_runner.go:130] > Type=notify
	I0806 00:35:47.862674    4292 command_runner.go:130] > Restart=on-failure
	I0806 00:35:47.862696    4292 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0806 00:35:47.862704    4292 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0806 00:35:47.862710    4292 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0806 00:35:47.862716    4292 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0806 00:35:47.862724    4292 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0806 00:35:47.862731    4292 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0806 00:35:47.862742    4292 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0806 00:35:47.862756    4292 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0806 00:35:47.862768    4292 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0806 00:35:47.862789    4292 command_runner.go:130] > ExecStart=
	I0806 00:35:47.862803    4292 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0806 00:35:47.862808    4292 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0806 00:35:47.862814    4292 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0806 00:35:47.862820    4292 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0806 00:35:47.862826    4292 command_runner.go:130] > LimitNOFILE=infinity
	I0806 00:35:47.862831    4292 command_runner.go:130] > LimitNPROC=infinity
	I0806 00:35:47.862835    4292 command_runner.go:130] > LimitCORE=infinity
	I0806 00:35:47.862840    4292 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0806 00:35:47.862847    4292 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0806 00:35:47.862852    4292 command_runner.go:130] > TasksMax=infinity
	I0806 00:35:47.862857    4292 command_runner.go:130] > TimeoutStartSec=0
	I0806 00:35:47.862864    4292 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0806 00:35:47.862869    4292 command_runner.go:130] > Delegate=yes
	I0806 00:35:47.862875    4292 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0806 00:35:47.862880    4292 command_runner.go:130] > KillMode=process
	I0806 00:35:47.862885    4292 command_runner.go:130] > [Install]
	I0806 00:35:47.862897    4292 command_runner.go:130] > WantedBy=multi-user.target
	I0806 00:35:47.862957    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:35:47.874503    4292 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 00:35:47.888401    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:35:47.899678    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:35:47.910858    4292 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0806 00:35:47.935194    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:35:47.946319    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:35:47.961240    4292 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0806 00:35:47.961509    4292 ssh_runner.go:195] Run: which cri-dockerd
	I0806 00:35:47.964405    4292 command_runner.go:130] > /usr/bin/cri-dockerd
	I0806 00:35:47.964539    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0806 00:35:47.972571    4292 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0806 00:35:47.986114    4292 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0806 00:35:48.089808    4292 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0806 00:35:48.189821    4292 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0806 00:35:48.189902    4292 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0806 00:35:48.205371    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:35:48.305180    4292 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:35:50.610688    4292 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.305442855s)
	I0806 00:35:50.610744    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0806 00:35:50.621917    4292 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0806 00:37:45.085447    4292 ssh_runner.go:235] Completed: sudo systemctl stop cri-docker.socket: (1m54.461245771s)
	I0806 00:37:45.085519    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0806 00:37:45.097196    4292 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0806 00:37:45.197114    4292 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0806 00:37:45.292406    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:37:45.391129    4292 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0806 00:37:45.405046    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0806 00:37:45.416102    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:37:45.533604    4292 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0806 00:37:45.589610    4292 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0806 00:37:45.589706    4292 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0806 00:37:45.594037    4292 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0806 00:37:45.594049    4292 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0806 00:37:45.594054    4292 command_runner.go:130] > Device: 0,22	Inode: 805         Links: 1
	I0806 00:37:45.594060    4292 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0806 00:37:45.594064    4292 command_runner.go:130] > Access: 2024-08-06 07:37:45.625216614 +0000
	I0806 00:37:45.594069    4292 command_runner.go:130] > Modify: 2024-08-06 07:37:45.625216614 +0000
	I0806 00:37:45.594073    4292 command_runner.go:130] > Change: 2024-08-06 07:37:45.627215775 +0000
	I0806 00:37:45.594076    4292 command_runner.go:130] >  Birth: -
	I0806 00:37:45.594117    4292 start.go:563] Will wait 60s for crictl version
	I0806 00:37:45.594161    4292 ssh_runner.go:195] Run: which crictl
	I0806 00:37:45.596956    4292 command_runner.go:130] > /usr/bin/crictl
	I0806 00:37:45.597171    4292 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 00:37:45.621060    4292 command_runner.go:130] > Version:  0.1.0
	I0806 00:37:45.621116    4292 command_runner.go:130] > RuntimeName:  docker
	I0806 00:37:45.621195    4292 command_runner.go:130] > RuntimeVersion:  27.1.1
	I0806 00:37:45.621265    4292 command_runner.go:130] > RuntimeApiVersion:  v1
	I0806 00:37:45.622461    4292 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0806 00:37:45.622524    4292 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0806 00:37:45.639748    4292 command_runner.go:130] > 27.1.1
	I0806 00:37:45.640898    4292 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0806 00:37:45.659970    4292 command_runner.go:130] > 27.1.1
	I0806 00:37:45.682623    4292 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0806 00:37:45.682654    4292 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:37:45.682940    4292 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0806 00:37:45.686120    4292 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 00:37:45.696475    4292 kubeadm.go:883] updating cluster {Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 00:37:45.696537    4292 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:37:45.696591    4292 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0806 00:37:45.709358    4292 docker.go:685] Got preloaded images: 
	I0806 00:37:45.709371    4292 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0806 00:37:45.709415    4292 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0806 00:37:45.717614    4292 command_runner.go:139] > {"Repositories":{}}
	I0806 00:37:45.717741    4292 ssh_runner.go:195] Run: which lz4
	I0806 00:37:45.720684    4292 command_runner.go:130] > /usr/bin/lz4
	I0806 00:37:45.720774    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0806 00:37:45.720887    4292 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0806 00:37:45.723901    4292 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 00:37:45.723990    4292 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 00:37:45.724007    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
	I0806 00:37:46.617374    4292 docker.go:649] duration metric: took 896.51057ms to copy over tarball
	I0806 00:37:46.617438    4292 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0806 00:37:48.962709    4292 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.345209203s)
	I0806 00:37:48.962723    4292 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 00:37:48.989708    4292 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0806 00:37:48.998314    4292 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.3":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.3":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.3":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d2
89d99da794784d1"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.3":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0806 00:37:48.998434    4292 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0806 00:37:49.011940    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:37:49.104996    4292 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:37:51.441428    4292 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.336367372s)
	I0806 00:37:51.441504    4292 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0806 00:37:51.454654    4292 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0806 00:37:51.454669    4292 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0806 00:37:51.454674    4292 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0806 00:37:51.454682    4292 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0806 00:37:51.454686    4292 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0806 00:37:51.454690    4292 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0806 00:37:51.454695    4292 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0806 00:37:51.454700    4292 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:37:51.455392    4292 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0806 00:37:51.455409    4292 cache_images.go:84] Images are preloaded, skipping loading
	I0806 00:37:51.455420    4292 kubeadm.go:934] updating node { 192.169.0.13 8443 v1.30.3 docker true true} ...
	I0806 00:37:51.455506    4292 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-100000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 00:37:51.455578    4292 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0806 00:37:51.493148    4292 command_runner.go:130] > cgroupfs
	I0806 00:37:51.493761    4292 cni.go:84] Creating CNI manager for ""
	I0806 00:37:51.493770    4292 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0806 00:37:51.493779    4292 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 00:37:51.493799    4292 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.13 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-100000 NodeName:multinode-100000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 00:37:51.493886    4292 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-100000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 00:37:51.493946    4292 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 00:37:51.501517    4292 command_runner.go:130] > kubeadm
	I0806 00:37:51.501524    4292 command_runner.go:130] > kubectl
	I0806 00:37:51.501527    4292 command_runner.go:130] > kubelet
	I0806 00:37:51.501670    4292 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 00:37:51.501712    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 00:37:51.509045    4292 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0806 00:37:51.522572    4292 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 00:37:51.535791    4292 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0806 00:37:51.549550    4292 ssh_runner.go:195] Run: grep 192.169.0.13	control-plane.minikube.internal$ /etc/hosts
	I0806 00:37:51.552639    4292 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 00:37:51.562209    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:37:51.657200    4292 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:37:51.669303    4292 certs.go:68] Setting up /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000 for IP: 192.169.0.13
	I0806 00:37:51.669315    4292 certs.go:194] generating shared ca certs ...
	I0806 00:37:51.669325    4292 certs.go:226] acquiring lock for ca certs: {Name:mk58145664d6c2b1eff70ba1600cc91cf1a11355 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.669518    4292 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.key
	I0806 00:37:51.669593    4292 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.key
	I0806 00:37:51.669606    4292 certs.go:256] generating profile certs ...
	I0806 00:37:51.669656    4292 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key
	I0806 00:37:51.669668    4292 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt with IP's: []
	I0806 00:37:51.792624    4292 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt ...
	I0806 00:37:51.792639    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt: {Name:mk8667fc194de8cf8fded4f6b0b716fe105f94fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.792981    4292 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key ...
	I0806 00:37:51.792989    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key: {Name:mk5693609b0c83eb3bce2eae7a5d8211445280d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.793215    4292 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key.de816dec
	I0806 00:37:51.793229    4292 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt.de816dec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.13]
	I0806 00:37:51.926808    4292 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt.de816dec ...
	I0806 00:37:51.926818    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt.de816dec: {Name:mk977e2f365dba4e3b0587a998566fa4d7926493 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.927069    4292 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key.de816dec ...
	I0806 00:37:51.927078    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key.de816dec: {Name:mkdef83341ea7ae5698bd9e2d60c39f8cd2a4e46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.927285    4292 certs.go:381] copying /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt.de816dec -> /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt
	I0806 00:37:51.927484    4292 certs.go:385] copying /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key.de816dec -> /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key
	I0806 00:37:51.927653    4292 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key
	I0806 00:37:51.927669    4292 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt with IP's: []
	I0806 00:37:52.088433    4292 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt ...
	I0806 00:37:52.088444    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt: {Name:mkc673b9a3bc6652ddb14f333f9d124c615a6826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:52.088718    4292 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key ...
	I0806 00:37:52.088726    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key: {Name:mkf7f90929aa11855cc285630f5ad4bb575ccae4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:52.088945    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0806 00:37:52.088974    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0806 00:37:52.088995    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0806 00:37:52.089015    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0806 00:37:52.089034    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0806 00:37:52.089054    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0806 00:37:52.089072    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0806 00:37:52.089091    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0806 00:37:52.089188    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437.pem (1338 bytes)
	W0806 00:37:52.089246    4292 certs.go:480] ignoring /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437_empty.pem, impossibly tiny 0 bytes
	I0806 00:37:52.089257    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 00:37:52.089300    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem (1078 bytes)
	I0806 00:37:52.089366    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem (1123 bytes)
	I0806 00:37:52.089422    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem (1679 bytes)
	I0806 00:37:52.089542    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem (1708 bytes)
	I0806 00:37:52.089590    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.089613    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.089632    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437.pem -> /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.090046    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 00:37:52.111710    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 00:37:52.131907    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 00:37:52.151479    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0806 00:37:52.171693    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0806 00:37:52.191484    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 00:37:52.211176    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 00:37:52.230802    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 00:37:52.250506    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem --> /usr/share/ca-certificates/14372.pem (1708 bytes)
	I0806 00:37:52.270606    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 00:37:52.290275    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437.pem --> /usr/share/ca-certificates/1437.pem (1338 bytes)
	I0806 00:37:52.309237    4292 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 00:37:52.323119    4292 ssh_runner.go:195] Run: openssl version
	I0806 00:37:52.327113    4292 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0806 00:37:52.327315    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14372.pem && ln -fs /usr/share/ca-certificates/14372.pem /etc/ssl/certs/14372.pem"
	I0806 00:37:52.335532    4292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.338816    4292 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  6 07:14 /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.338844    4292 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:14 /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.338901    4292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.343016    4292 command_runner.go:130] > 3ec20f2e
	I0806 00:37:52.343165    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14372.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 00:37:52.351433    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 00:37:52.362210    4292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.368669    4292 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  6 07:05 /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.368937    4292 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:05 /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.368987    4292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.373469    4292 command_runner.go:130] > b5213941
	I0806 00:37:52.373704    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 00:37:52.384235    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1437.pem && ln -fs /usr/share/ca-certificates/1437.pem /etc/ssl/certs/1437.pem"
	I0806 00:37:52.395305    4292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.400212    4292 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  6 07:14 /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.400421    4292 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:14 /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.400474    4292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.406136    4292 command_runner.go:130] > 51391683
	I0806 00:37:52.406235    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1437.pem /etc/ssl/certs/51391683.0"
	I0806 00:37:52.415464    4292 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 00:37:52.418597    4292 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0806 00:37:52.418637    4292 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0806 00:37:52.418680    4292 kubeadm.go:392] StartCluster: {Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:37:52.418767    4292 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0806 00:37:52.431331    4292 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 00:37:52.439651    4292 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0806 00:37:52.439663    4292 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0806 00:37:52.439684    4292 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0806 00:37:52.439814    4292 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 00:37:52.447838    4292 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 00:37:52.455844    4292 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0806 00:37:52.455854    4292 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0806 00:37:52.455860    4292 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0806 00:37:52.455865    4292 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 00:37:52.455878    4292 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 00:37:52.455884    4292 kubeadm.go:157] found existing configuration files:
	
	I0806 00:37:52.455917    4292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 00:37:52.463564    4292 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 00:37:52.463581    4292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 00:37:52.463638    4292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 00:37:52.471500    4292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 00:37:52.479060    4292 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 00:37:52.479083    4292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 00:37:52.479115    4292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 00:37:52.487038    4292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 00:37:52.494658    4292 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 00:37:52.494678    4292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 00:37:52.494715    4292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 00:37:52.502699    4292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 00:37:52.510396    4292 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 00:37:52.510413    4292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 00:37:52.510448    4292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 00:37:52.518459    4292 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 00:37:52.582551    4292 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0806 00:37:52.582567    4292 command_runner.go:130] > [init] Using Kubernetes version: v1.30.3
	I0806 00:37:52.582622    4292 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 00:37:52.582630    4292 command_runner.go:130] > [preflight] Running pre-flight checks
	I0806 00:37:52.670948    4292 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 00:37:52.670966    4292 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 00:37:52.671056    4292 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 00:37:52.671068    4292 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 00:37:52.671166    4292 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 00:37:52.671175    4292 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 00:37:52.840152    4292 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 00:37:52.840173    4292 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 00:37:52.860448    4292 out.go:204]   - Generating certificates and keys ...
	I0806 00:37:52.860515    4292 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0806 00:37:52.860522    4292 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 00:37:52.860574    4292 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0806 00:37:52.860578    4292 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 00:37:53.262704    4292 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0806 00:37:53.262716    4292 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0806 00:37:53.357977    4292 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0806 00:37:53.357990    4292 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0806 00:37:53.460380    4292 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0806 00:37:53.460383    4292 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0806 00:37:53.557795    4292 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0806 00:37:53.557804    4292 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0806 00:37:53.672961    4292 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0806 00:37:53.672972    4292 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0806 00:37:53.673143    4292 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-100000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0806 00:37:53.673153    4292 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-100000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0806 00:37:53.823821    4292 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0806 00:37:53.823828    4292 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0806 00:37:53.823935    4292 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-100000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0806 00:37:53.823943    4292 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-100000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0806 00:37:53.907043    4292 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0806 00:37:53.907053    4292 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0806 00:37:54.170203    4292 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0806 00:37:54.170215    4292 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0806 00:37:54.232963    4292 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0806 00:37:54.232976    4292 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0806 00:37:54.233108    4292 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 00:37:54.233115    4292 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 00:37:54.560300    4292 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 00:37:54.560310    4292 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 00:37:54.689503    4292 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0806 00:37:54.689520    4292 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0806 00:37:54.772704    4292 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 00:37:54.772714    4292 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 00:37:54.901757    4292 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 00:37:54.901770    4292 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 00:37:55.057967    4292 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 00:37:55.057987    4292 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 00:37:55.058372    4292 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 00:37:55.058381    4292 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 00:37:55.060093    4292 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 00:37:55.060100    4292 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 00:37:55.081494    4292 out.go:204]   - Booting up control plane ...
	I0806 00:37:55.081559    4292 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 00:37:55.081566    4292 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 00:37:55.081622    4292 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 00:37:55.081627    4292 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 00:37:55.081688    4292 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 00:37:55.081706    4292 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 00:37:55.081835    4292 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 00:37:55.081836    4292 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 00:37:55.081921    4292 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 00:37:55.081928    4292 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 00:37:55.081962    4292 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 00:37:55.081972    4292 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0806 00:37:55.190382    4292 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0806 00:37:55.190382    4292 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0806 00:37:55.190467    4292 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0806 00:37:55.190474    4292 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0806 00:37:55.692270    4292 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.007026ms
	I0806 00:37:55.692288    4292 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 502.007026ms
	I0806 00:37:55.692374    4292 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0806 00:37:55.692383    4292 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0806 00:37:59.693684    4292 kubeadm.go:310] [api-check] The API server is healthy after 4.003026548s
	I0806 00:37:59.693693    4292 command_runner.go:130] > [api-check] The API server is healthy after 4.003026548s
	I0806 00:37:59.705633    4292 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0806 00:37:59.705646    4292 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0806 00:37:59.720099    4292 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0806 00:37:59.720109    4292 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0806 00:37:59.738249    4292 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0806 00:37:59.738275    4292 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0806 00:37:59.738423    4292 kubeadm.go:310] [mark-control-plane] Marking the node multinode-100000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0806 00:37:59.738434    4292 command_runner.go:130] > [mark-control-plane] Marking the node multinode-100000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0806 00:37:59.745383    4292 kubeadm.go:310] [bootstrap-token] Using token: vbomjh.qsf72loo4zgv06fc
	I0806 00:37:59.745397    4292 command_runner.go:130] > [bootstrap-token] Using token: vbomjh.qsf72loo4zgv06fc
	I0806 00:37:59.783358    4292 out.go:204]   - Configuring RBAC rules ...
	I0806 00:37:59.783539    4292 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0806 00:37:59.783560    4292 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0806 00:37:59.785907    4292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0806 00:37:59.785948    4292 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0806 00:37:59.826999    4292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0806 00:37:59.827006    4292 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0806 00:37:59.829623    4292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0806 00:37:59.829627    4292 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0806 00:37:59.832217    4292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0806 00:37:59.832231    4292 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0806 00:37:59.834614    4292 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0806 00:37:59.834628    4292 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0806 00:38:00.099434    4292 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0806 00:38:00.099444    4292 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0806 00:38:00.510267    4292 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0806 00:38:00.510286    4292 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0806 00:38:01.098516    4292 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0806 00:38:01.098535    4292 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0806 00:38:01.099426    4292 kubeadm.go:310] 
	I0806 00:38:01.099476    4292 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0806 00:38:01.099482    4292 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0806 00:38:01.099485    4292 kubeadm.go:310] 
	I0806 00:38:01.099571    4292 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0806 00:38:01.099579    4292 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0806 00:38:01.099583    4292 kubeadm.go:310] 
	I0806 00:38:01.099621    4292 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0806 00:38:01.099627    4292 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0806 00:38:01.099685    4292 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0806 00:38:01.099692    4292 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0806 00:38:01.099737    4292 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0806 00:38:01.099742    4292 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0806 00:38:01.099758    4292 kubeadm.go:310] 
	I0806 00:38:01.099805    4292 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0806 00:38:01.099811    4292 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0806 00:38:01.099816    4292 kubeadm.go:310] 
	I0806 00:38:01.099868    4292 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0806 00:38:01.099874    4292 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0806 00:38:01.099878    4292 kubeadm.go:310] 
	I0806 00:38:01.099924    4292 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0806 00:38:01.099932    4292 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0806 00:38:01.099998    4292 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0806 00:38:01.100012    4292 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0806 00:38:01.100083    4292 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0806 00:38:01.100088    4292 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0806 00:38:01.100092    4292 kubeadm.go:310] 
	I0806 00:38:01.100168    4292 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0806 00:38:01.100177    4292 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0806 00:38:01.100245    4292 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0806 00:38:01.100249    4292 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0806 00:38:01.100256    4292 kubeadm.go:310] 
	I0806 00:38:01.100330    4292 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token vbomjh.qsf72loo4zgv06fc \
	I0806 00:38:01.100335    4292 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vbomjh.qsf72loo4zgv06fc \
	I0806 00:38:01.100422    4292 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e \
	I0806 00:38:01.100428    4292 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e \
	I0806 00:38:01.100450    4292 command_runner.go:130] > 	--control-plane 
	I0806 00:38:01.100454    4292 kubeadm.go:310] 	--control-plane 
	I0806 00:38:01.100465    4292 kubeadm.go:310] 
	I0806 00:38:01.100533    4292 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0806 00:38:01.100538    4292 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0806 00:38:01.100545    4292 kubeadm.go:310] 
	I0806 00:38:01.100605    4292 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token vbomjh.qsf72loo4zgv06fc \
	I0806 00:38:01.100610    4292 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vbomjh.qsf72loo4zgv06fc \
	I0806 00:38:01.100694    4292 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e 
	I0806 00:38:01.100703    4292 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e 
	I0806 00:38:01.101330    4292 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 00:38:01.101334    4292 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 00:38:01.101354    4292 cni.go:84] Creating CNI manager for ""
	I0806 00:38:01.101361    4292 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0806 00:38:01.123627    4292 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0806 00:38:01.196528    4292 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0806 00:38:01.201237    4292 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0806 00:38:01.201250    4292 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0806 00:38:01.201255    4292 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0806 00:38:01.201260    4292 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0806 00:38:01.201265    4292 command_runner.go:130] > Access: 2024-08-06 07:35:44.089192446 +0000
	I0806 00:38:01.201275    4292 command_runner.go:130] > Modify: 2024-07-29 16:10:03.000000000 +0000
	I0806 00:38:01.201282    4292 command_runner.go:130] > Change: 2024-08-06 07:35:42.019366338 +0000
	I0806 00:38:01.201285    4292 command_runner.go:130] >  Birth: -
	I0806 00:38:01.201457    4292 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0806 00:38:01.201465    4292 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0806 00:38:01.217771    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0806 00:38:01.451925    4292 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0806 00:38:01.451939    4292 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0806 00:38:01.451946    4292 command_runner.go:130] > serviceaccount/kindnet created
	I0806 00:38:01.451949    4292 command_runner.go:130] > daemonset.apps/kindnet created
	I0806 00:38:01.451970    4292 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 00:38:01.452056    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:01.452057    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-100000 minikube.k8s.io/updated_at=2024_08_06T00_38_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8 minikube.k8s.io/name=multinode-100000 minikube.k8s.io/primary=true
	I0806 00:38:01.610233    4292 command_runner.go:130] > node/multinode-100000 labeled
	I0806 00:38:01.611382    4292 command_runner.go:130] > -16
	I0806 00:38:01.611408    4292 ops.go:34] apiserver oom_adj: -16
	I0806 00:38:01.611436    4292 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0806 00:38:01.611535    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:01.673352    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:02.112700    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:02.170574    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:02.612824    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:02.681015    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:03.112860    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:03.173114    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:03.612060    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:03.674241    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:04.112239    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:04.174075    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:04.613016    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:04.675523    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:05.112239    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:05.171613    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:05.611863    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:05.672963    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:06.112009    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:06.167728    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:06.613273    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:06.670554    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:07.113057    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:07.167700    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:07.613035    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:07.675035    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:08.113568    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:08.177386    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:08.611850    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:08.669063    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:09.113472    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:09.173560    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:09.613780    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:09.676070    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:10.112109    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:10.172674    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:10.613930    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:10.669788    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:11.112032    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:11.178288    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:11.612564    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:11.681621    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:12.112219    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:12.169314    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:12.612581    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:12.670247    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:13.113181    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:13.172574    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:13.613362    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:13.672811    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:14.112553    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:14.177904    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:14.612414    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:14.708737    4292 command_runner.go:130] > NAME      SECRETS   AGE
	I0806 00:38:14.708751    4292 command_runner.go:130] > default   0         0s
	I0806 00:38:14.710041    4292 kubeadm.go:1113] duration metric: took 13.257790627s to wait for elevateKubeSystemPrivileges
	I0806 00:38:14.710058    4292 kubeadm.go:394] duration metric: took 22.29094538s to StartCluster
	I0806 00:38:14.710072    4292 settings.go:142] acquiring lock: {Name:mk7aec99dc6d69d6a2c18b35ff8bde3cddf78620 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:38:14.710182    4292 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:38:14.710733    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/kubeconfig: {Name:mka547673b59bc4eb06e1f2c8130de31708dba29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:38:14.710987    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0806 00:38:14.710992    4292 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 00:38:14.711032    4292 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 00:38:14.711084    4292 addons.go:69] Setting storage-provisioner=true in profile "multinode-100000"
	I0806 00:38:14.711092    4292 addons.go:69] Setting default-storageclass=true in profile "multinode-100000"
	I0806 00:38:14.711119    4292 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-100000"
	I0806 00:38:14.711121    4292 addons.go:234] Setting addon storage-provisioner=true in "multinode-100000"
	I0806 00:38:14.711168    4292 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:38:14.711168    4292 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:38:14.711516    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.711537    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.711593    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.711618    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.720676    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52433
	I0806 00:38:14.721047    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52435
	I0806 00:38:14.721245    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.721337    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.721602    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.721612    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.721697    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.721714    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.721841    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.721914    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.721953    4292 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:38:14.722073    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:14.722146    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:38:14.722387    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.722420    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.724119    4292 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:38:14.724644    4292 kapi.go:59] client config for multinode-100000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key", CAFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x126711a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 00:38:14.725326    4292 cert_rotation.go:137] Starting client certificate rotation controller
	I0806 00:38:14.725514    4292 addons.go:234] Setting addon default-storageclass=true in "multinode-100000"
	I0806 00:38:14.725534    4292 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:38:14.725758    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.725781    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.731505    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52437
	I0806 00:38:14.731883    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.732214    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.732225    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.732427    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.732542    4292 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:38:14.732646    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:14.732716    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:38:14.733688    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:38:14.734469    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52439
	I0806 00:38:14.749366    4292 out.go:177] * Verifying Kubernetes components...
	I0806 00:38:14.750086    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.771676    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.771692    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.771908    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.772346    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.772371    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.781133    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52441
	I0806 00:38:14.781487    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.781841    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.781857    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.782071    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.782186    4292 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:38:14.782264    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:14.782343    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:38:14.783274    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:38:14.783391    4292 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 00:38:14.783400    4292 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 00:38:14.783408    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:38:14.783487    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:38:14.783566    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:38:14.783647    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:38:14.783724    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:38:14.807507    4292 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:38:14.814402    4292 command_runner.go:130] > apiVersion: v1
	I0806 00:38:14.814414    4292 command_runner.go:130] > data:
	I0806 00:38:14.814417    4292 command_runner.go:130] >   Corefile: |
	I0806 00:38:14.814421    4292 command_runner.go:130] >     .:53 {
	I0806 00:38:14.814427    4292 command_runner.go:130] >         errors
	I0806 00:38:14.814434    4292 command_runner.go:130] >         health {
	I0806 00:38:14.814462    4292 command_runner.go:130] >            lameduck 5s
	I0806 00:38:14.814467    4292 command_runner.go:130] >         }
	I0806 00:38:14.814470    4292 command_runner.go:130] >         ready
	I0806 00:38:14.814475    4292 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0806 00:38:14.814479    4292 command_runner.go:130] >            pods insecure
	I0806 00:38:14.814483    4292 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0806 00:38:14.814491    4292 command_runner.go:130] >            ttl 30
	I0806 00:38:14.814494    4292 command_runner.go:130] >         }
	I0806 00:38:14.814498    4292 command_runner.go:130] >         prometheus :9153
	I0806 00:38:14.814502    4292 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0806 00:38:14.814511    4292 command_runner.go:130] >            max_concurrent 1000
	I0806 00:38:14.814515    4292 command_runner.go:130] >         }
	I0806 00:38:14.814519    4292 command_runner.go:130] >         cache 30
	I0806 00:38:14.814522    4292 command_runner.go:130] >         loop
	I0806 00:38:14.814527    4292 command_runner.go:130] >         reload
	I0806 00:38:14.814530    4292 command_runner.go:130] >         loadbalance
	I0806 00:38:14.814541    4292 command_runner.go:130] >     }
	I0806 00:38:14.814545    4292 command_runner.go:130] > kind: ConfigMap
	I0806 00:38:14.814548    4292 command_runner.go:130] > metadata:
	I0806 00:38:14.814555    4292 command_runner.go:130] >   creationTimestamp: "2024-08-06T07:38:00Z"
	I0806 00:38:14.814559    4292 command_runner.go:130] >   name: coredns
	I0806 00:38:14.814563    4292 command_runner.go:130] >   namespace: kube-system
	I0806 00:38:14.814566    4292 command_runner.go:130] >   resourceVersion: "257"
	I0806 00:38:14.814570    4292 command_runner.go:130] >   uid: d8fd854e-ee58-4cd2-8723-2418b89b5dc3
	I0806 00:38:14.814679    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0806 00:38:14.866135    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:38:14.866436    4292 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 00:38:14.866454    4292 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 00:38:14.866500    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:38:14.866990    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:38:14.867164    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:38:14.867290    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:38:14.867406    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:38:14.872742    4292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 00:38:15.241341    4292 command_runner.go:130] > configmap/coredns replaced
	I0806 00:38:15.242685    4292 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I0806 00:38:15.242758    4292 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:38:15.242961    4292 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:38:15.243148    4292 kapi.go:59] client config for multinode-100000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key", CAFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x126711a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 00:38:15.243392    4292 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0806 00:38:15.243400    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.243407    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.243411    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.256678    4292 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0806 00:38:15.256695    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.256702    4292 round_trippers.go:580]     Audit-Id: c7c6b1c0-d638-405d-9826-1613f9442124
	I0806 00:38:15.256715    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.256719    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.256721    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.256724    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.256731    4292 round_trippers.go:580]     Content-Length: 291
	I0806 00:38:15.256734    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.256762    4292 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a7f2b260-b404-47f8-94a7-9444b4d2e65d","resourceVersion":"385","creationTimestamp":"2024-08-06T07:38:00Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0806 00:38:15.257109    4292 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a7f2b260-b404-47f8-94a7-9444b4d2e65d","resourceVersion":"385","creationTimestamp":"2024-08-06T07:38:00Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0806 00:38:15.257149    4292 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0806 00:38:15.257157    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.257163    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.257166    4292 round_trippers.go:473]     Content-Type: application/json
	I0806 00:38:15.257169    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.263818    4292 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0806 00:38:15.263831    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.263837    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.263840    4292 round_trippers.go:580]     Content-Length: 291
	I0806 00:38:15.263843    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.263846    4292 round_trippers.go:580]     Audit-Id: fc5baf31-13f0-4c94-a234-c9583698bc4a
	I0806 00:38:15.263849    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.263853    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.263856    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.263869    4292 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a7f2b260-b404-47f8-94a7-9444b4d2e65d","resourceVersion":"387","creationTimestamp":"2024-08-06T07:38:00Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0806 00:38:15.288440    4292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 00:38:15.316986    4292 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0806 00:38:15.318339    4292 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:38:15.318523    4292 kapi.go:59] client config for multinode-100000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key", CAFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x126711a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 00:38:15.318703    4292 node_ready.go:35] waiting up to 6m0s for node "multinode-100000" to be "Ready" ...
	I0806 00:38:15.318752    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:15.318757    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.318762    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.318766    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.318890    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.318897    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.319084    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.319089    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.319096    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.319104    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.319113    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.319239    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.319249    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.319298    4292 round_trippers.go:463] GET https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses
	I0806 00:38:15.319296    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.319304    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.319313    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.319316    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.328466    4292 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0806 00:38:15.328478    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.328484    4292 round_trippers.go:580]     Content-Length: 1273
	I0806 00:38:15.328487    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.328490    4292 round_trippers.go:580]     Audit-Id: 55117bdb-b1b1-4b1d-a091-1eb3d07a9569
	I0806 00:38:15.328493    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.328496    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.328498    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.328501    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.328521    4292 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"396"},"items":[{"metadata":{"name":"standard","uid":"db2316a9-24ea-47df-bf39-03322fc9a8eb","resourceVersion":"396","creationTimestamp":"2024-08-06T07:38:15Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-06T07:38:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0806 00:38:15.328567    4292 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0806 00:38:15.328581    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.328586    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.328590    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.328593    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.328596    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.328599    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.328602    4292 round_trippers.go:580]     Audit-Id: 7ce70ed0-47c9-432d-8e5b-ac52e38e59a7
	I0806 00:38:15.328766    4292 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"db2316a9-24ea-47df-bf39-03322fc9a8eb","resourceVersion":"396","creationTimestamp":"2024-08-06T07:38:15Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-06T07:38:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0806 00:38:15.328802    4292 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0806 00:38:15.328808    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.328813    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.328818    4292 round_trippers.go:473]     Content-Type: application/json
	I0806 00:38:15.328820    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.330337    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:15.340216    4292 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0806 00:38:15.340231    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.340236    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.340243    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.340247    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.340251    4292 round_trippers.go:580]     Content-Length: 1220
	I0806 00:38:15.340254    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.340257    4292 round_trippers.go:580]     Audit-Id: 6dc8b90a-612f-4331-8c4e-911fcb5e8b97
	I0806 00:38:15.340261    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.340479    4292 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"db2316a9-24ea-47df-bf39-03322fc9a8eb","resourceVersion":"396","creationTimestamp":"2024-08-06T07:38:15Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-06T07:38:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0806 00:38:15.340564    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.340574    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.340728    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.340739    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.340746    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.606405    4292 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0806 00:38:15.610350    4292 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0806 00:38:15.615396    4292 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0806 00:38:15.619891    4292 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0806 00:38:15.627349    4292 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0806 00:38:15.635206    4292 command_runner.go:130] > pod/storage-provisioner created
	I0806 00:38:15.636675    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.636686    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.636830    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.636833    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.636843    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.636852    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.636857    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.636972    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.636980    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.636995    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.660876    4292 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0806 00:38:15.681735    4292 addons.go:510] duration metric: took 970.696783ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0806 00:38:15.744023    4292 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0806 00:38:15.744043    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.744049    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.744053    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.745471    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:15.745481    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.745486    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.745489    4292 round_trippers.go:580]     Audit-Id: 2e02dd3c-4368-4363-aef8-c54cb00d4e41
	I0806 00:38:15.745492    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.745495    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.745497    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.745500    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.745503    4292 round_trippers.go:580]     Content-Length: 291
	I0806 00:38:15.745519    4292 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a7f2b260-b404-47f8-94a7-9444b4d2e65d","resourceVersion":"399","creationTimestamp":"2024-08-06T07:38:00Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0806 00:38:15.745572    4292 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-100000" context rescaled to 1 replicas
	I0806 00:38:15.820125    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:15.820137    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.820143    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.820145    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.821478    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:15.821488    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.821495    4292 round_trippers.go:580]     Audit-Id: 2538e82b-a5b8-4cce-b67d-49b0a0cc6ccb
	I0806 00:38:15.821499    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.821504    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.821509    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.821513    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.821517    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.821699    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:16.318995    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:16.319022    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:16.319044    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:16.319050    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:16.321451    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:16.321466    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:16.321473    4292 round_trippers.go:580]     Audit-Id: 6d358883-b606-4bf9-b02f-6cb3dcc86ebb
	I0806 00:38:16.321478    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:16.321482    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:16.321507    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:16.321515    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:16.321519    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:16 GMT
	I0806 00:38:16.321636    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:16.819864    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:16.819880    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:16.819887    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:16.819892    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:16.822003    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:16.822013    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:16.822019    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:16.822032    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:16 GMT
	I0806 00:38:16.822039    4292 round_trippers.go:580]     Audit-Id: 688c294c-2ec1-4257-9ae2-31048566e1a5
	I0806 00:38:16.822042    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:16.822045    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:16.822048    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:16.822127    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:17.319875    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:17.319887    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:17.319893    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:17.319898    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:17.324202    4292 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 00:38:17.324219    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:17.324228    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:17.324233    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:17.324237    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:17.324247    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:17.324251    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:17 GMT
	I0806 00:38:17.324253    4292 round_trippers.go:580]     Audit-Id: 3cbcad32-1d66-4480-8eea-e0ba3baeb718
	I0806 00:38:17.324408    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:17.324668    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:17.818929    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:17.818941    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:17.818948    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:17.818952    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:17.820372    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:17.820383    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:17.820390    4292 round_trippers.go:580]     Audit-Id: 1b64d2ad-91d1-49c6-8964-cd044f7ab24f
	I0806 00:38:17.820395    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:17.820400    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:17.820404    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:17.820407    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:17.820409    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:17 GMT
	I0806 00:38:17.820562    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:18.318915    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:18.318928    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:18.318934    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:18.318937    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:18.320383    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:18.320392    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:18.320396    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:18.320400    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:18 GMT
	I0806 00:38:18.320403    4292 round_trippers.go:580]     Audit-Id: b404a6ee-15b9-4e15-b8f8-4cd9324a513d
	I0806 00:38:18.320405    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:18.320408    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:18.320411    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:18.320536    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:18.819634    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:18.819647    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:18.819654    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:18.819657    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:18.821628    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:18.821635    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:18.821639    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:18 GMT
	I0806 00:38:18.821643    4292 round_trippers.go:580]     Audit-Id: 12545d9e-2520-4675-8957-dd291bc1d252
	I0806 00:38:18.821646    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:18.821649    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:18.821651    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:18.821654    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:18.821749    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:19.319242    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:19.319258    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:19.319264    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:19.319267    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:19.320611    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:19.320621    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:19.320627    4292 round_trippers.go:580]     Audit-Id: a9b124b2-ff49-4d7d-961a-c4a1b6b3e4ab
	I0806 00:38:19.320630    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:19.320632    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:19.320635    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:19.320639    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:19.320642    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:19 GMT
	I0806 00:38:19.320781    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:19.820342    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:19.820371    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:19.820428    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:19.820437    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:19.823221    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:19.823242    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:19.823252    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:19.823258    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:19.823266    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:19.823272    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:19 GMT
	I0806 00:38:19.823291    4292 round_trippers.go:580]     Audit-Id: 9330a785-b406-42d7-a74c-e80b34311e1a
	I0806 00:38:19.823302    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:19.823409    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:19.823671    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:20.319027    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:20.319043    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:20.319051    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:20.319056    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:20.320812    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:20.320821    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:20.320827    4292 round_trippers.go:580]     Audit-Id: 1d9840bb-ba8b-45f8-852f-8ef7f645c8bd
	I0806 00:38:20.320830    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:20.320832    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:20.320835    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:20.320838    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:20.320841    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:20 GMT
	I0806 00:38:20.321034    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:20.819543    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:20.819566    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:20.819578    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:20.819585    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:20.822277    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:20.822293    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:20.822300    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:20.822303    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:20.822307    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:20 GMT
	I0806 00:38:20.822310    4292 round_trippers.go:580]     Audit-Id: 6a96712c-fdd2-4036-95c0-27109366b2b5
	I0806 00:38:20.822313    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:20.822332    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:20.822436    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:21.319938    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:21.320061    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:21.320076    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:21.320084    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:21.322332    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:21.322343    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:21.322350    4292 round_trippers.go:580]     Audit-Id: b6796df6-8c9c-475a-b9c2-e73edb1c0720
	I0806 00:38:21.322355    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:21.322359    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:21.322362    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:21.322366    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:21.322370    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:21 GMT
	I0806 00:38:21.322503    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:21.819349    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:21.819372    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:21.819383    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:21.819388    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:21.821890    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:21.821905    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:21.821912    4292 round_trippers.go:580]     Audit-Id: 89b2a861-f5a0-43e4-9d3f-01f7230eecc8
	I0806 00:38:21.821916    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:21.821920    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:21.821923    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:21.821927    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:21.821931    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:21 GMT
	I0806 00:38:21.822004    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:22.320544    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:22.320565    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:22.320576    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:22.320581    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:22.322858    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:22.322872    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:22.322879    4292 round_trippers.go:580]     Audit-Id: 70ae59be-bf9a-4c7a-9fb8-93ea504768fb
	I0806 00:38:22.322885    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:22.322888    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:22.322891    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:22.322895    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:22.322897    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:22 GMT
	I0806 00:38:22.323158    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:22.323412    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:22.819095    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:22.819114    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:22.819126    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:22.819132    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:22.821284    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:22.821297    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:22.821307    4292 round_trippers.go:580]     Audit-Id: 1c5d80ab-21c3-4733-bd98-f4c681e0fe0e
	I0806 00:38:22.821313    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:22.821318    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:22.821321    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:22.821324    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:22.821334    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:22 GMT
	I0806 00:38:22.821552    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:23.319478    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:23.319500    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:23.319518    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:23.319524    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:23.322104    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:23.322124    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:23.322132    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:23.322137    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:23.322143    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:23.322146    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:23.322156    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:23 GMT
	I0806 00:38:23.322161    4292 round_trippers.go:580]     Audit-Id: 5276d3f7-64a0-4983-b60c-4943cbdfd74f
	I0806 00:38:23.322305    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:23.819102    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:23.819121    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:23.819130    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:23.819135    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:23.821174    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:23.821208    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:23.821216    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:23.821222    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:23.821227    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:23.821230    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:23.821241    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:23 GMT
	I0806 00:38:23.821254    4292 round_trippers.go:580]     Audit-Id: 9a86a309-2e1e-4b43-9975-baf4a0c93f44
	I0806 00:38:23.821483    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:24.320265    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:24.320287    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:24.320299    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:24.320305    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:24.323064    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:24.323097    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:24.323123    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:24.323140    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:24.323149    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:24.323178    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:24.323185    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:24 GMT
	I0806 00:38:24.323196    4292 round_trippers.go:580]     Audit-Id: b0ef4ff1-b4d6-4fd5-870c-46b9f544b517
	I0806 00:38:24.323426    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:24.323675    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:24.819060    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:24.819080    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:24.819097    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:24.819136    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:24.821377    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:24.821390    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:24.821397    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:24 GMT
	I0806 00:38:24.821402    4292 round_trippers.go:580]     Audit-Id: b050183e-0245-4d40-9972-e2dd2be24181
	I0806 00:38:24.821405    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:24.821409    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:24.821413    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:24.821418    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:24.821619    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:25.319086    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:25.319102    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:25.319110    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:25.319114    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:25.321127    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:25.321149    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:25.321154    4292 round_trippers.go:580]     Audit-Id: b27c2996-2cfb-4ec2-83b6-49df62cf6805
	I0806 00:38:25.321177    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:25.321180    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:25.321184    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:25.321186    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:25.321194    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:25 GMT
	I0806 00:38:25.321259    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:25.820656    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:25.820678    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:25.820689    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:25.820695    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:25.823182    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:25.823194    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:25.823205    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:25.823210    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:25.823213    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:25.823216    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:25.823219    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:25 GMT
	I0806 00:38:25.823222    4292 round_trippers.go:580]     Audit-Id: e11f3fd5-b1c3-44c0-931c-e7172ae35765
	I0806 00:38:25.823311    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:26.320693    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:26.320710    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:26.320717    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:26.320721    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:26.322330    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:26.322339    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:26.322344    4292 round_trippers.go:580]     Audit-Id: 0c372b78-f3b7-43f2-a7aa-6ec405f17ce3
	I0806 00:38:26.322347    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:26.322350    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:26.322353    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:26.322363    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:26.322366    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:26 GMT
	I0806 00:38:26.322578    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:26.820921    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:26.820948    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:26.820966    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:26.820972    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:26.823698    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:26.823713    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:26.823723    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:26.823730    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:26.823739    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:26 GMT
	I0806 00:38:26.823757    4292 round_trippers.go:580]     Audit-Id: e8e852a8-07b7-455b-8f5b-ff9801610b22
	I0806 00:38:26.823766    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:26.823770    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:26.824211    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:26.824465    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:27.321232    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:27.321253    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:27.321265    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:27.321270    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:27.324530    4292 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:38:27.324543    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:27.324550    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:27.324554    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:27 GMT
	I0806 00:38:27.324566    4292 round_trippers.go:580]     Audit-Id: 4a0b2d15-d15f-46de-8b4a-13a9d4121efd
	I0806 00:38:27.324572    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:27.324578    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:27.324583    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:27.324732    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:27.820148    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:27.820170    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:27.820181    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:27.820186    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:27.822835    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:27.822859    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:27.823023    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:27.823030    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:27.823033    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:27.823038    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:27.823046    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:27 GMT
	I0806 00:38:27.823049    4292 round_trippers.go:580]     Audit-Id: 77dd4240-18e0-49c7-8881-ae5df446f885
	I0806 00:38:27.823127    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:28.319391    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:28.319412    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:28.319423    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:28.319431    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:28.321889    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:28.321906    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:28.321916    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:28.321923    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:28.321927    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:28.321930    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:28 GMT
	I0806 00:38:28.321933    4292 round_trippers.go:580]     Audit-Id: d4ff4fc8-d53b-4307-82a0-9a61164b0b18
	I0806 00:38:28.321937    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:28.322088    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:28.819334    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:28.819362    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:28.819374    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:28.819385    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:28.821814    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:28.821826    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:28.821833    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:28.821838    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:28.821843    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:28.821847    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:28.821851    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:28 GMT
	I0806 00:38:28.821855    4292 round_trippers.go:580]     Audit-Id: 9a79b284-c2c3-4adb-9d74-73805465144b
	I0806 00:38:28.821988    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:29.320103    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:29.320120    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:29.320128    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:29.320134    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:29.321966    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:29.321980    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:29.321987    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:29.322000    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:29.322005    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:29.322008    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:29 GMT
	I0806 00:38:29.322020    4292 round_trippers.go:580]     Audit-Id: 749bcf9b-24c9-4fac-99d8-ad9e961b1897
	I0806 00:38:29.322024    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:29.322094    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:29.322341    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:29.819722    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:29.819743    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:29.819752    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:29.819760    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:29.822636    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:29.822668    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:29.822700    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:29.822711    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:29.822721    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:29.822735    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:29 GMT
	I0806 00:38:29.822748    4292 round_trippers.go:580]     Audit-Id: 5408f9b5-fba3-4495-a0b7-9791cf82019c
	I0806 00:38:29.822773    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:29.822903    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:30.320349    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:30.320370    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.320380    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.320385    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.322518    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:30.322531    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.322538    4292 round_trippers.go:580]     Audit-Id: 1df1df85-a25c-4470-876a-7b00620c8f9b
	I0806 00:38:30.322543    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.322546    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.322550    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.322553    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.322558    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.322794    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:30.820065    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:30.820087    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.820099    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.820111    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.822652    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:30.822673    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.822683    4292 round_trippers.go:580]     Audit-Id: 0926ae78-d98d-44a5-8489-5522ccd95503
	I0806 00:38:30.822689    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.822695    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.822700    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.822706    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.822713    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.823032    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:30.823315    4292 node_ready.go:49] node "multinode-100000" has status "Ready":"True"
	I0806 00:38:30.823329    4292 node_ready.go:38] duration metric: took 15.504306549s for node "multinode-100000" to be "Ready" ...
	I0806 00:38:30.823341    4292 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 00:38:30.823387    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:30.823395    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.823403    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.823407    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.825747    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:30.825756    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.825761    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.825764    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.825768    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.825770    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.825773    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.825775    4292 round_trippers.go:580]     Audit-Id: f1883856-a563-4d68-a4ed-7bface4b980a
	I0806 00:38:30.827206    4292 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"431","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56289 chars]
	I0806 00:38:30.829456    4292 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:30.829498    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:38:30.829503    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.829508    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.829512    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.830675    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:30.830684    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.830691    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.830696    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.830704    4292 round_trippers.go:580]     Audit-Id: f42eab96-6adf-4fcb-9345-e180ca00b73d
	I0806 00:38:30.830715    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.830718    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.830720    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.830856    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"431","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0806 00:38:30.831092    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:30.831099    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.831105    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.831107    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.832184    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:30.832191    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.832197    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.832203    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.832207    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.832212    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.832218    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.832226    4292 round_trippers.go:580]     Audit-Id: d34ccfc2-089c-4010-b991-cc425a2b2446
	I0806 00:38:30.832371    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.329830    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:38:31.329844    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.329850    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.329854    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.331738    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:31.331767    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.331789    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.331808    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.331813    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.331817    4292 round_trippers.go:580]     Audit-Id: 32294b1b-fd5c-43f7-9851-1c5e5d04c3d9
	I0806 00:38:31.331820    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.331823    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.331921    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"431","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0806 00:38:31.332207    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.332215    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.332221    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.332225    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.333311    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:31.333324    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.333331    4292 round_trippers.go:580]     Audit-Id: a8b9458e-7f48-4e61-9daf-b2c4a52b1285
	I0806 00:38:31.333336    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.333342    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.333347    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.333351    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.333369    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.333493    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.830019    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:38:31.830040    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.830057    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.830063    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.832040    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:31.832055    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.832062    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.832068    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.832072    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.832076    4292 round_trippers.go:580]     Audit-Id: eae85e40-d774-4e35-8513-1a20542ce5f5
	I0806 00:38:31.832079    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.832082    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.832316    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"446","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6576 chars]
	I0806 00:38:31.832691    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.832701    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.832710    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.832715    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.833679    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.833688    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.833694    4292 round_trippers.go:580]     Audit-Id: ecd49a1b-eb24-4191-89bb-5cb071fd543a
	I0806 00:38:31.833699    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.833702    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.833711    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.833714    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.833717    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.833906    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.834082    4292 pod_ready.go:92] pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:31.834093    4292 pod_ready.go:81] duration metric: took 1.004604302s for pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.834101    4292 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.834131    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-100000
	I0806 00:38:31.834136    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.834141    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.834145    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.835126    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.835134    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.835139    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.835144    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.835147    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.835152    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.835155    4292 round_trippers.go:580]     Audit-Id: 8f3355e7-ed89-4a5c-9ef4-3f319a0b7eef
	I0806 00:38:31.835157    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.835289    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-100000","namespace":"kube-system","uid":"227ab7d9-399e-4151-bee7-1520182e38fe","resourceVersion":"333","creationTimestamp":"2024-08-06T07:37:59Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"4d956ffcd8bdef6a75a3174d9c9d792c","kubernetes.io/config.mirror":"4d956ffcd8bdef6a75a3174d9c9d792c","kubernetes.io/config.seen":"2024-08-06T07:37:55.730523562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:37:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0806 00:38:31.835498    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.835505    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.835510    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.835514    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.836524    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:31.836533    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.836539    4292 round_trippers.go:580]     Audit-Id: a9fdb4f7-31e3-48e4-b5f3-023b2c5e4bab
	I0806 00:38:31.836547    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.836553    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.836556    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.836562    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.836568    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.836674    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.836837    4292 pod_ready.go:92] pod "etcd-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:31.836847    4292 pod_ready.go:81] duration metric: took 2.741532ms for pod "etcd-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.836854    4292 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.836883    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-100000
	I0806 00:38:31.836888    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.836894    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.836898    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.837821    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.837830    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.837836    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.837840    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.837844    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.837846    4292 round_trippers.go:580]     Audit-Id: 32a7a6c7-72cf-4b7f-8f80-7ebb5aaaf666
	I0806 00:38:31.837850    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.837853    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.838003    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-100000","namespace":"kube-system","uid":"ce1dee9b-5f30-49a9-9066-7faf5f65c4d3","resourceVersion":"331","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"7812fbdfd4f741d8b504bcb30d9268c5","kubernetes.io/config.mirror":"7812fbdfd4f741d8b504bcb30d9268c5","kubernetes.io/config.seen":"2024-08-06T07:38:00.425843150Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7684 chars]
	I0806 00:38:31.838230    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.838237    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.838243    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.838247    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.839014    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.839023    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.839030    4292 round_trippers.go:580]     Audit-Id: 7f28e0f4-8551-4462-aec2-766b8d2482cb
	I0806 00:38:31.839036    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.839040    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.839042    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.839045    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.839048    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.839181    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.839335    4292 pod_ready.go:92] pod "kube-apiserver-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:31.839345    4292 pod_ready.go:81] duration metric: took 2.482949ms for pod "kube-apiserver-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.839352    4292 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.839378    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-100000
	I0806 00:38:31.839383    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.839388    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.839392    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.840298    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.840305    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.840310    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.840313    4292 round_trippers.go:580]     Audit-Id: cf384588-551f-4b8a-b13b-1adda6dff10a
	I0806 00:38:31.840317    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.840320    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.840324    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.840328    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.840495    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-100000","namespace":"kube-system","uid":"cefe88fb-c337-47c3-b4f2-acdadde539f2","resourceVersion":"329","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0ae29164078dfb7d8ac7d5a935c4d875","kubernetes.io/config.mirror":"0ae29164078dfb7d8ac7d5a935c4d875","kubernetes.io/config.seen":"2024-08-06T07:38:00.425770816Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7259 chars]
	I0806 00:38:31.840707    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.840714    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.840719    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.840722    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.841465    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.841471    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.841476    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.841481    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.841487    4292 round_trippers.go:580]     Audit-Id: 9a301694-659b-414d-8736-740501267c17
	I0806 00:38:31.841491    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.841496    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.841500    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.841678    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.841830    4292 pod_ready.go:92] pod "kube-controller-manager-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:31.841836    4292 pod_ready.go:81] duration metric: took 2.479787ms for pod "kube-controller-manager-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.841842    4292 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-crsrr" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.841875    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crsrr
	I0806 00:38:31.841880    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.841885    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.841890    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.842875    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.842883    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.842888    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.842891    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.842895    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.842898    4292 round_trippers.go:580]     Audit-Id: 9e07db72-d867-47d3-adbc-514b547e8978
	I0806 00:38:31.842901    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.842904    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.843113    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-crsrr","generateName":"kube-proxy-","namespace":"kube-system","uid":"f72beca3-9601-4aad-b3ba-33f8de5db052","resourceVersion":"403","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aeb7868a-2175-4480-b58d-3eb9a593c884","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aeb7868a-2175-4480-b58d-3eb9a593c884\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
	I0806 00:38:32.021239    4292 request.go:629] Waited for 177.889914ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:32.021360    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:32.021372    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.021384    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.021390    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.024288    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:32.024309    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.024318    4292 round_trippers.go:580]     Audit-Id: d85fbd21-5256-48bd-b92b-10eb012d9c7a
	I0806 00:38:32.024322    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.024327    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.024331    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.024336    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.024339    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.024617    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:32.024865    4292 pod_ready.go:92] pod "kube-proxy-crsrr" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:32.024877    4292 pod_ready.go:81] duration metric: took 183.025974ms for pod "kube-proxy-crsrr" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:32.024887    4292 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:32.222202    4292 request.go:629] Waited for 197.196804ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-100000
	I0806 00:38:32.222252    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-100000
	I0806 00:38:32.222260    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.222284    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.222291    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.225758    4292 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:38:32.225776    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.225783    4292 round_trippers.go:580]     Audit-Id: 9c5c96d8-55ee-43bd-b8fe-af3b79432f55
	I0806 00:38:32.225788    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.225791    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.225797    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.225800    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.225803    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.225862    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-100000","namespace":"kube-system","uid":"773d7bde-86f3-4e9d-b4aa-67ca3b345180","resourceVersion":"332","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4d38f57d568be838072abd789adb44b9","kubernetes.io/config.mirror":"4d38f57d568be838072abd789adb44b9","kubernetes.io/config.seen":"2024-08-06T07:38:00.425836810Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0806 00:38:32.420759    4292 request.go:629] Waited for 194.652014ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:32.420927    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:32.420938    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.420949    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.420955    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.423442    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:32.423460    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.423471    4292 round_trippers.go:580]     Audit-Id: 04a6ba1a-a35c-4d8b-a087-80f9206646b4
	I0806 00:38:32.423478    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.423483    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.423488    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.423493    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.423499    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.423791    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:32.424052    4292 pod_ready.go:92] pod "kube-scheduler-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:32.424064    4292 pod_ready.go:81] duration metric: took 399.162309ms for pod "kube-scheduler-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:32.424073    4292 pod_ready.go:38] duration metric: took 1.600692444s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 00:38:32.424096    4292 api_server.go:52] waiting for apiserver process to appear ...
	I0806 00:38:32.424160    4292 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:38:32.436813    4292 command_runner.go:130] > 1953
	I0806 00:38:32.436840    4292 api_server.go:72] duration metric: took 17.725484476s to wait for apiserver process to appear ...
	I0806 00:38:32.436849    4292 api_server.go:88] waiting for apiserver healthz status ...
	I0806 00:38:32.436863    4292 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0806 00:38:32.440364    4292 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0806 00:38:32.440399    4292 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I0806 00:38:32.440404    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.440410    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.440421    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.440928    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:32.440937    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.440942    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.440946    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.440950    4292 round_trippers.go:580]     Content-Length: 263
	I0806 00:38:32.440953    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.440959    4292 round_trippers.go:580]     Audit-Id: c1a3bf62-d4bb-49fe-bb9c-6619b1793ab6
	I0806 00:38:32.440962    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.440965    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.440976    4292 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0806 00:38:32.441018    4292 api_server.go:141] control plane version: v1.30.3
	I0806 00:38:32.441028    4292 api_server.go:131] duration metric: took 4.174407ms to wait for apiserver health ...
	I0806 00:38:32.441033    4292 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 00:38:32.620918    4292 request.go:629] Waited for 179.84972ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:32.620960    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:32.620982    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.620988    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.620992    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.623183    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:32.623194    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.623199    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.623202    4292 round_trippers.go:580]     Audit-Id: 7febd61d-780d-47b6-884a-fdaf22170934
	I0806 00:38:32.623206    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.623211    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.623217    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.623221    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.623596    4292 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"446","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0806 00:38:32.624861    4292 system_pods.go:59] 8 kube-system pods found
	I0806 00:38:32.624876    4292 system_pods.go:61] "coredns-7db6d8ff4d-snf8h" [80bd44de-6f91-4e47-8832-a66b3c64808d] Running
	I0806 00:38:32.624880    4292 system_pods.go:61] "etcd-multinode-100000" [227ab7d9-399e-4151-bee7-1520182e38fe] Running
	I0806 00:38:32.624883    4292 system_pods.go:61] "kindnet-g2xk7" [84207ead-3403-4759-9bf2-ae0aa742699e] Running
	I0806 00:38:32.624886    4292 system_pods.go:61] "kube-apiserver-multinode-100000" [ce1dee9b-5f30-49a9-9066-7faf5f65c4d3] Running
	I0806 00:38:32.624890    4292 system_pods.go:61] "kube-controller-manager-multinode-100000" [cefe88fb-c337-47c3-b4f2-acdadde539f2] Running
	I0806 00:38:32.624895    4292 system_pods.go:61] "kube-proxy-crsrr" [f72beca3-9601-4aad-b3ba-33f8de5db052] Running
	I0806 00:38:32.624897    4292 system_pods.go:61] "kube-scheduler-multinode-100000" [773d7bde-86f3-4e9d-b4aa-67ca3b345180] Running
	I0806 00:38:32.624900    4292 system_pods.go:61] "storage-provisioner" [38b20fa5-6002-4e12-860f-1aa0047581b1] Running
	I0806 00:38:32.624904    4292 system_pods.go:74] duration metric: took 183.863815ms to wait for pod list to return data ...
	I0806 00:38:32.624911    4292 default_sa.go:34] waiting for default service account to be created ...
	I0806 00:38:32.821065    4292 request.go:629] Waited for 196.088199ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0806 00:38:32.821123    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0806 00:38:32.821132    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.821146    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.821153    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.824169    4292 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:38:32.824185    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.824192    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.824198    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.824203    4292 round_trippers.go:580]     Content-Length: 261
	I0806 00:38:32.824207    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.824210    4292 round_trippers.go:580]     Audit-Id: da9e49d4-6671-4b25-a056-32b71af0fb45
	I0806 00:38:32.824214    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.824217    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.824230    4292 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b920a0f4-26ad-4389-bfd3-1a9764da9619","resourceVersion":"336","creationTimestamp":"2024-08-06T07:38:14Z"}}]}
	I0806 00:38:32.824397    4292 default_sa.go:45] found service account: "default"
	I0806 00:38:32.824409    4292 default_sa.go:55] duration metric: took 199.488573ms for default service account to be created ...
	I0806 00:38:32.824419    4292 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 00:38:33.021550    4292 request.go:629] Waited for 197.072106ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:33.021720    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:33.021731    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:33.021741    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:33.021779    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:33.025126    4292 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:38:33.025143    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:33.025150    4292 round_trippers.go:580]     Audit-Id: e38b20d4-b38f-40c8-9e18-7f94f8f63289
	I0806 00:38:33.025155    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:33.025161    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:33.025166    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:33.025173    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:33.025177    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:33 GMT
	I0806 00:38:33.025737    4292 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"446","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0806 00:38:33.027034    4292 system_pods.go:86] 8 kube-system pods found
	I0806 00:38:33.027043    4292 system_pods.go:89] "coredns-7db6d8ff4d-snf8h" [80bd44de-6f91-4e47-8832-a66b3c64808d] Running
	I0806 00:38:33.027047    4292 system_pods.go:89] "etcd-multinode-100000" [227ab7d9-399e-4151-bee7-1520182e38fe] Running
	I0806 00:38:33.027050    4292 system_pods.go:89] "kindnet-g2xk7" [84207ead-3403-4759-9bf2-ae0aa742699e] Running
	I0806 00:38:33.027054    4292 system_pods.go:89] "kube-apiserver-multinode-100000" [ce1dee9b-5f30-49a9-9066-7faf5f65c4d3] Running
	I0806 00:38:33.027057    4292 system_pods.go:89] "kube-controller-manager-multinode-100000" [cefe88fb-c337-47c3-b4f2-acdadde539f2] Running
	I0806 00:38:33.027060    4292 system_pods.go:89] "kube-proxy-crsrr" [f72beca3-9601-4aad-b3ba-33f8de5db052] Running
	I0806 00:38:33.027066    4292 system_pods.go:89] "kube-scheduler-multinode-100000" [773d7bde-86f3-4e9d-b4aa-67ca3b345180] Running
	I0806 00:38:33.027069    4292 system_pods.go:89] "storage-provisioner" [38b20fa5-6002-4e12-860f-1aa0047581b1] Running
	I0806 00:38:33.027074    4292 system_pods.go:126] duration metric: took 202.645822ms to wait for k8s-apps to be running ...
	I0806 00:38:33.027081    4292 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 00:38:33.027147    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:38:33.038782    4292 system_svc.go:56] duration metric: took 11.697186ms WaitForService to wait for kubelet
	I0806 00:38:33.038797    4292 kubeadm.go:582] duration metric: took 18.327429775s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 00:38:33.038809    4292 node_conditions.go:102] verifying NodePressure condition ...
	I0806 00:38:33.220593    4292 request.go:629] Waited for 181.736174ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes
	I0806 00:38:33.220673    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0806 00:38:33.220683    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:33.220694    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:33.220703    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:33.223131    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:33.223147    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:33.223155    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:33 GMT
	I0806 00:38:33.223160    4292 round_trippers.go:580]     Audit-Id: c7a766de-973c-44db-9b8e-eb7ce291fdca
	I0806 00:38:33.223172    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:33.223177    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:33.223182    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:33.223222    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:33.223296    4292 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5011 chars]
	I0806 00:38:33.223576    4292 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 00:38:33.223592    4292 node_conditions.go:123] node cpu capacity is 2
	I0806 00:38:33.223604    4292 node_conditions.go:105] duration metric: took 184.787012ms to run NodePressure ...
	I0806 00:38:33.223614    4292 start.go:241] waiting for startup goroutines ...
	I0806 00:38:33.223627    4292 start.go:246] waiting for cluster config update ...
	I0806 00:38:33.223640    4292 start.go:255] writing updated cluster config ...
	I0806 00:38:33.244314    4292 out.go:177] 
	I0806 00:38:33.265217    4292 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:38:33.265273    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:38:33.287112    4292 out.go:177] * Starting "multinode-100000-m02" worker node in "multinode-100000" cluster
	I0806 00:38:33.345022    4292 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:38:33.345057    4292 cache.go:56] Caching tarball of preloaded images
	I0806 00:38:33.345244    4292 preload.go:172] Found /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0806 00:38:33.345262    4292 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 00:38:33.345351    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:38:33.346110    4292 start.go:360] acquireMachinesLock for multinode-100000-m02: {Name:mk23fe223591838ba69a1052c4474834b6e8897d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:38:33.346217    4292 start.go:364] duration metric: took 84.997µs to acquireMachinesLock for "multinode-100000-m02"
	I0806 00:38:33.346243    4292 start.go:93] Provisioning new machine with config: &{Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0806 00:38:33.346328    4292 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0806 00:38:33.367079    4292 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 00:38:33.367208    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:33.367236    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:33.376938    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52447
	I0806 00:38:33.377289    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:33.377644    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:33.377655    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:33.377842    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:33.377956    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:38:33.378049    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:33.378167    4292 start.go:159] libmachine.API.Create for "multinode-100000" (driver="hyperkit")
	I0806 00:38:33.378183    4292 client.go:168] LocalClient.Create starting
	I0806 00:38:33.378211    4292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem
	I0806 00:38:33.378259    4292 main.go:141] libmachine: Decoding PEM data...
	I0806 00:38:33.378273    4292 main.go:141] libmachine: Parsing certificate...
	I0806 00:38:33.378324    4292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem
	I0806 00:38:33.378363    4292 main.go:141] libmachine: Decoding PEM data...
	I0806 00:38:33.378372    4292 main.go:141] libmachine: Parsing certificate...
	I0806 00:38:33.378386    4292 main.go:141] libmachine: Running pre-create checks...
	I0806 00:38:33.378391    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .PreCreateCheck
	I0806 00:38:33.378464    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:33.378493    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetConfigRaw
	I0806 00:38:33.388269    4292 main.go:141] libmachine: Creating machine...
	I0806 00:38:33.388286    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .Create
	I0806 00:38:33.388457    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:33.388692    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | I0806 00:38:33.388444    4424 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 00:38:33.388794    4292 main.go:141] libmachine: (multinode-100000-m02) Downloading /Users/jenkins/minikube-integration/19370-944/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-944/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 00:38:33.588443    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | I0806 00:38:33.588344    4424 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa...
	I0806 00:38:33.635329    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | I0806 00:38:33.635211    4424 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/multinode-100000-m02.rawdisk...
	I0806 00:38:33.635352    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Writing magic tar header
	I0806 00:38:33.635368    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Writing SSH key tar header
	I0806 00:38:33.635773    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | I0806 00:38:33.635735    4424 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02 ...
	I0806 00:38:34.046661    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:34.046692    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/hyperkit.pid
	I0806 00:38:34.046795    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Using UUID 11e38ce6-805a-4a8b-9cb1-968ee3a613d4
	I0806 00:38:34.072180    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Generated MAC ee:b:b7:3a:75:5c
	I0806 00:38:34.072206    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000
	I0806 00:38:34.072252    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"11e38ce6-805a-4a8b-9cb1-968ee3a613d4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00011a450)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 00:38:34.072281    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"11e38ce6-805a-4a8b-9cb1-968ee3a613d4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00011a450)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 00:38:34.072340    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "11e38ce6-805a-4a8b-9cb1-968ee3a613d4", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/multinode-100000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"}
	I0806 00:38:34.072382    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 11e38ce6-805a-4a8b-9cb1-968ee3a613d4 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/multinode-100000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"
	I0806 00:38:34.072394    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0806 00:38:34.075231    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: Pid is 4427
	I0806 00:38:34.076417    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 0
	I0806 00:38:34.076438    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:34.076502    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:34.077372    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:34.077449    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:34.077468    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:34.077497    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:34.077509    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:34.077532    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:34.077550    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:34.077560    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:34.077570    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:34.077578    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:34.077587    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:34.077606    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:34.077631    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:34.077647    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:34.082964    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0806 00:38:34.092078    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0806 00:38:34.092798    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:38:34.092819    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:38:34.092831    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:38:34.092850    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:38:34.480770    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0806 00:38:34.480795    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0806 00:38:34.595499    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:38:34.595518    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:38:34.595530    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:38:34.595538    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:38:34.596350    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0806 00:38:34.596362    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0806 00:38:36.077787    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 1
	I0806 00:38:36.077803    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:36.077889    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:36.078719    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:36.078768    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:36.078779    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:36.078796    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:36.078805    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:36.078813    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:36.078820    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:36.078827    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:36.078837    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:36.078843    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:36.078849    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:36.078864    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:36.078881    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:36.078889    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:38.079369    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 2
	I0806 00:38:38.079385    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:38.079432    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:38.080212    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:38.080262    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:38.080273    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:38.080290    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:38.080296    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:38.080303    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:38.080310    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:38.080318    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:38.080325    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:38.080339    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:38.080355    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:38.080367    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:38.080376    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:38.080384    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:40.081876    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 3
	I0806 00:38:40.081892    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:40.081903    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:40.082774    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:40.082801    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:40.082812    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:40.082846    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:40.082873    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:40.082900    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:40.082918    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:40.082931    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:40.082940    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:40.082950    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:40.082966    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:40.082978    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:40.082987    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:40.082995    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:40.179725    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:40 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0806 00:38:40.179781    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:40 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0806 00:38:40.179795    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:40 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0806 00:38:40.203197    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:40 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0806 00:38:42.084360    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 4
	I0806 00:38:42.084374    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:42.084499    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:42.085281    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:42.085335    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:42.085343    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:42.085351    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:42.085358    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:42.085365    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:42.085371    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:42.085378    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:42.085386    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:42.085402    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:42.085414    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:42.085433    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:42.085441    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:42.085450    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:44.085602    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 5
	I0806 00:38:44.085628    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:44.085697    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:44.086496    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:44.086550    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 13 entries in /var/db/dhcpd_leases!
	I0806 00:38:44.086561    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b32483}
	I0806 00:38:44.086569    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found match: ee:b:b7:3a:75:5c
	I0806 00:38:44.086577    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | IP: 192.169.0.14
	I0806 00:38:44.086637    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetConfigRaw
	I0806 00:38:44.087855    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:44.087962    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:44.088059    4292 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0806 00:38:44.088068    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetState
	I0806 00:38:44.088141    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:44.088197    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:44.089006    4292 main.go:141] libmachine: Detecting operating system of created instance...
	I0806 00:38:44.089014    4292 main.go:141] libmachine: Waiting for SSH to be available...
	I0806 00:38:44.089023    4292 main.go:141] libmachine: Getting to WaitForSSH function...
	I0806 00:38:44.089029    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:44.089111    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:44.089190    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:44.089273    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:44.089354    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:44.089473    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:44.089664    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:44.089672    4292 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0806 00:38:45.153792    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:38:45.153806    4292 main.go:141] libmachine: Detecting the provisioner...
	I0806 00:38:45.153811    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.153942    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.154043    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.154169    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.154275    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.154425    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.154571    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.154581    4292 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0806 00:38:45.217564    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0806 00:38:45.217637    4292 main.go:141] libmachine: found compatible host: buildroot
	I0806 00:38:45.217648    4292 main.go:141] libmachine: Provisioning with buildroot...
	I0806 00:38:45.217668    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:38:45.217807    4292 buildroot.go:166] provisioning hostname "multinode-100000-m02"
	I0806 00:38:45.217817    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:38:45.217917    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.218023    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.218107    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.218194    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.218285    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.218407    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.218557    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.218566    4292 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-100000-m02 && echo "multinode-100000-m02" | sudo tee /etc/hostname
	I0806 00:38:45.293086    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-100000-m02
	
	I0806 00:38:45.293102    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.293254    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.293346    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.293437    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.293522    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.293658    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.293798    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.293811    4292 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-100000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-100000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-100000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 00:38:45.363408    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:38:45.363423    4292 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19370-944/.minikube CaCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19370-944/.minikube}
	I0806 00:38:45.363450    4292 buildroot.go:174] setting up certificates
	I0806 00:38:45.363457    4292 provision.go:84] configureAuth start
	I0806 00:38:45.363465    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:38:45.363605    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:38:45.363709    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.363796    4292 provision.go:143] copyHostCerts
	I0806 00:38:45.363827    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:38:45.363873    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem, removing ...
	I0806 00:38:45.363879    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:38:45.364378    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem (1078 bytes)
	I0806 00:38:45.364592    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:38:45.364623    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem, removing ...
	I0806 00:38:45.364628    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:38:45.364717    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem (1123 bytes)
	I0806 00:38:45.364875    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:38:45.364915    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem, removing ...
	I0806 00:38:45.364920    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:38:45.365034    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem (1679 bytes)
	I0806 00:38:45.365183    4292 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem org=jenkins.multinode-100000-m02 san=[127.0.0.1 192.169.0.14 localhost minikube multinode-100000-m02]
	I0806 00:38:45.437744    4292 provision.go:177] copyRemoteCerts
	I0806 00:38:45.437791    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 00:38:45.437806    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.437948    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.438040    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.438126    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.438207    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:38:45.477030    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0806 00:38:45.477105    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0806 00:38:45.496899    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0806 00:38:45.496965    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 00:38:45.516273    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0806 00:38:45.516341    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 00:38:45.536083    4292 provision.go:87] duration metric: took 172.615051ms to configureAuth
	I0806 00:38:45.536096    4292 buildroot.go:189] setting minikube options for container-runtime
	I0806 00:38:45.536221    4292 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:38:45.536234    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:45.536380    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.536470    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.536563    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.536650    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.536733    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.536861    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.536987    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.536994    4292 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0806 00:38:45.599518    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0806 00:38:45.599531    4292 buildroot.go:70] root file system type: tmpfs
	I0806 00:38:45.599626    4292 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0806 00:38:45.599637    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.599779    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.599891    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.599996    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.600086    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.600232    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.600374    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.600420    4292 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.13"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0806 00:38:45.674942    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.13
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0806 00:38:45.674960    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.675092    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.675165    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.675259    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.675344    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.675469    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.675602    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.675614    4292 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0806 00:38:47.211811    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0806 00:38:47.211826    4292 main.go:141] libmachine: Checking connection to Docker...
	I0806 00:38:47.211840    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetURL
	I0806 00:38:47.211985    4292 main.go:141] libmachine: Docker is up and running!
	I0806 00:38:47.211993    4292 main.go:141] libmachine: Reticulating splines...
	I0806 00:38:47.212004    4292 client.go:171] duration metric: took 13.833536596s to LocalClient.Create
	I0806 00:38:47.212016    4292 start.go:167] duration metric: took 13.833577856s to libmachine.API.Create "multinode-100000"
	I0806 00:38:47.212022    4292 start.go:293] postStartSetup for "multinode-100000-m02" (driver="hyperkit")
	I0806 00:38:47.212029    4292 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 00:38:47.212038    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.212165    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 00:38:47.212186    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:47.212274    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:47.212359    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.212450    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:47.212536    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:38:47.253675    4292 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 00:38:47.257359    4292 command_runner.go:130] > NAME=Buildroot
	I0806 00:38:47.257369    4292 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0806 00:38:47.257374    4292 command_runner.go:130] > ID=buildroot
	I0806 00:38:47.257380    4292 command_runner.go:130] > VERSION_ID=2023.02.9
	I0806 00:38:47.257386    4292 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0806 00:38:47.257598    4292 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 00:38:47.257609    4292 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/addons for local assets ...
	I0806 00:38:47.257715    4292 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/files for local assets ...
	I0806 00:38:47.257899    4292 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> 14372.pem in /etc/ssl/certs
	I0806 00:38:47.257909    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> /etc/ssl/certs/14372.pem
	I0806 00:38:47.258116    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 00:38:47.265892    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem --> /etc/ssl/certs/14372.pem (1708 bytes)
	I0806 00:38:47.297110    4292 start.go:296] duration metric: took 85.078237ms for postStartSetup
	I0806 00:38:47.297144    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetConfigRaw
	I0806 00:38:47.297792    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:38:47.297951    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:38:47.298302    4292 start.go:128] duration metric: took 13.951673071s to createHost
	I0806 00:38:47.298316    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:47.298413    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:47.298502    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.298600    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.298678    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:47.298783    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:47.298907    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:47.298914    4292 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0806 00:38:47.362043    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722929927.409318196
	
	I0806 00:38:47.362057    4292 fix.go:216] guest clock: 1722929927.409318196
	I0806 00:38:47.362062    4292 fix.go:229] Guest: 2024-08-06 00:38:47.409318196 -0700 PDT Remote: 2024-08-06 00:38:47.29831 -0700 PDT m=+194.654596821 (delta=111.008196ms)
	I0806 00:38:47.362071    4292 fix.go:200] guest clock delta is within tolerance: 111.008196ms
	I0806 00:38:47.362075    4292 start.go:83] releasing machines lock for "multinode-100000-m02", held for 14.015572789s
	I0806 00:38:47.362092    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.362220    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:38:47.382612    4292 out.go:177] * Found network options:
	I0806 00:38:47.403509    4292 out.go:177]   - NO_PROXY=192.169.0.13
	W0806 00:38:47.425687    4292 proxy.go:119] fail to check proxy env: Error ip not in block
	I0806 00:38:47.425738    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.426659    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.426958    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.427090    4292 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 00:38:47.427141    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	W0806 00:38:47.427187    4292 proxy.go:119] fail to check proxy env: Error ip not in block
	I0806 00:38:47.427313    4292 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0806 00:38:47.427341    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:47.427407    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:47.427565    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.427581    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:47.427794    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:47.427828    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.428004    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:38:47.428059    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:47.428184    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:38:47.463967    4292 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0806 00:38:47.464076    4292 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 00:38:47.464135    4292 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 00:38:47.515738    4292 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0806 00:38:47.516046    4292 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0806 00:38:47.516081    4292 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 00:38:47.516093    4292 start.go:495] detecting cgroup driver to use...
	I0806 00:38:47.516195    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:38:47.531806    4292 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0806 00:38:47.532062    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0806 00:38:47.541039    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0806 00:38:47.549828    4292 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0806 00:38:47.549876    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0806 00:38:47.558599    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:38:47.567484    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0806 00:38:47.576295    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:38:47.585146    4292 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 00:38:47.594084    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0806 00:38:47.603103    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0806 00:38:47.612032    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0806 00:38:47.620981    4292 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 00:38:47.628905    4292 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0806 00:38:47.629040    4292 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 00:38:47.637032    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:38:47.727863    4292 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0806 00:38:47.745831    4292 start.go:495] detecting cgroup driver to use...
	I0806 00:38:47.745898    4292 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0806 00:38:47.763079    4292 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0806 00:38:47.764017    4292 command_runner.go:130] > [Unit]
	I0806 00:38:47.764028    4292 command_runner.go:130] > Description=Docker Application Container Engine
	I0806 00:38:47.764033    4292 command_runner.go:130] > Documentation=https://docs.docker.com
	I0806 00:38:47.764038    4292 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0806 00:38:47.764043    4292 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0806 00:38:47.764047    4292 command_runner.go:130] > StartLimitBurst=3
	I0806 00:38:47.764051    4292 command_runner.go:130] > StartLimitIntervalSec=60
	I0806 00:38:47.764054    4292 command_runner.go:130] > [Service]
	I0806 00:38:47.764058    4292 command_runner.go:130] > Type=notify
	I0806 00:38:47.764062    4292 command_runner.go:130] > Restart=on-failure
	I0806 00:38:47.764066    4292 command_runner.go:130] > Environment=NO_PROXY=192.169.0.13
	I0806 00:38:47.764072    4292 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0806 00:38:47.764084    4292 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0806 00:38:47.764091    4292 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0806 00:38:47.764099    4292 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0806 00:38:47.764105    4292 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0806 00:38:47.764111    4292 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0806 00:38:47.764118    4292 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0806 00:38:47.764125    4292 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0806 00:38:47.764132    4292 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0806 00:38:47.764135    4292 command_runner.go:130] > ExecStart=
	I0806 00:38:47.764154    4292 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0806 00:38:47.764161    4292 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0806 00:38:47.764170    4292 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0806 00:38:47.764178    4292 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0806 00:38:47.764185    4292 command_runner.go:130] > LimitNOFILE=infinity
	I0806 00:38:47.764190    4292 command_runner.go:130] > LimitNPROC=infinity
	I0806 00:38:47.764193    4292 command_runner.go:130] > LimitCORE=infinity
	I0806 00:38:47.764198    4292 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0806 00:38:47.764203    4292 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0806 00:38:47.764207    4292 command_runner.go:130] > TasksMax=infinity
	I0806 00:38:47.764211    4292 command_runner.go:130] > TimeoutStartSec=0
	I0806 00:38:47.764221    4292 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0806 00:38:47.764225    4292 command_runner.go:130] > Delegate=yes
	I0806 00:38:47.764229    4292 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0806 00:38:47.764248    4292 command_runner.go:130] > KillMode=process
	I0806 00:38:47.764252    4292 command_runner.go:130] > [Install]
	I0806 00:38:47.764256    4292 command_runner.go:130] > WantedBy=multi-user.target
	I0806 00:38:47.765971    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:38:47.779284    4292 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 00:38:47.799617    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:38:47.811733    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:38:47.822897    4292 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0806 00:38:47.842546    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:38:47.852923    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:38:47.867417    4292 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0806 00:38:47.867762    4292 ssh_runner.go:195] Run: which cri-dockerd
	I0806 00:38:47.870482    4292 command_runner.go:130] > /usr/bin/cri-dockerd
	I0806 00:38:47.870656    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0806 00:38:47.877934    4292 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0806 00:38:47.891287    4292 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0806 00:38:47.996736    4292 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0806 00:38:48.093921    4292 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0806 00:38:48.093947    4292 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0806 00:38:48.107654    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:38:48.205348    4292 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:39:49.225463    4292 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0806 00:39:49.225479    4292 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0806 00:39:49.225576    4292 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.019011706s)
	I0806 00:39:49.225635    4292 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0806 00:39:49.235342    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0806 00:39:49.235356    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.029974914Z" level=info msg="Starting up"
	I0806 00:39:49.235366    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.030437769Z" level=info msg="containerd not running, starting managed containerd"
	I0806 00:39:49.235376    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.030979400Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=517
	I0806 00:39:49.235386    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.047036729Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0806 00:39:49.235397    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064397167Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0806 00:39:49.235412    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064452673Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0806 00:39:49.235422    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064502313Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0806 00:39:49.235431    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064513542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235443    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064584182Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235454    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064595120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235473    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064727739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235483    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064762709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235494    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064774342Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235504    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064782161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235516    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064887916Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235526    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.065042581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235542    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.066836201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235552    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.066879570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235575    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067028916Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235585    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067064324Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0806 00:39:49.235594    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067179567Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0806 00:39:49.235602    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067249087Z" level=info msg="metadata content store policy set" policy=shared
	I0806 00:39:49.235611    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069585528Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0806 00:39:49.235620    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069659860Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0806 00:39:49.235632    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069674694Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0806 00:39:49.235641    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069684754Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0806 00:39:49.235650    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069696901Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0806 00:39:49.235663    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069776277Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0806 00:39:49.235672    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070041788Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0806 00:39:49.235681    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070145442Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0806 00:39:49.235690    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070181841Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0806 00:39:49.235699    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070193788Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0806 00:39:49.235708    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070209053Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235730    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070220561Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235739    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070229053Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235748    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070237872Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235765    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070247145Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235774    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070258808Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235870    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070271932Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235884    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070282113Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235895    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070295317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235905    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070333749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235913    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070369063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235922    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070379382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235931    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070387399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235940    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070395816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235948    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070403669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235957    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070414456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235966    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070430669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235975    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070442977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235983    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070451302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235992    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070459477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236001    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070468439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236009    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070478113Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0806 00:39:49.236018    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070497412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236026    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070508384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236035    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070518009Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0806 00:39:49.236044    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070547883Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0806 00:39:49.236055    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070582373Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0806 00:39:49.236065    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070592270Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0806 00:39:49.236165    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070600495Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0806 00:39:49.236179    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070607217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236192    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070615273Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0806 00:39:49.236200    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070622931Z" level=info msg="NRI interface is disabled by configuration."
	I0806 00:39:49.236208    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070750538Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0806 00:39:49.236217    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070809085Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0806 00:39:49.236224    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070954500Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0806 00:39:49.236232    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070997549Z" level=info msg="containerd successfully booted in 0.024512s"
	I0806 00:39:49.236240    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.050791909Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0806 00:39:49.236247    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.057142082Z" level=info msg="Loading containers: start."
	I0806 00:39:49.236266    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.142415375Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0806 00:39:49.236275    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.222958623Z" level=info msg="Loading containers: done."
	I0806 00:39:49.236287    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.231011060Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	I0806 00:39:49.236296    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.231179810Z" level=info msg="Daemon has completed initialization"
	I0806 00:39:49.236304    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.256766502Z" level=info msg="API listen on [::]:2376"
	I0806 00:39:49.236312    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 systemd[1]: Started Docker Application Container Engine.
	I0806 00:39:49.236320    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.256921161Z" level=info msg="API listen on /var/run/docker.sock"
	I0806 00:39:49.236327    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.264611587Z" level=info msg="Processing signal 'terminated'"
	I0806 00:39:49.236336    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265650519Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0806 00:39:49.236346    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265852818Z" level=info msg="Daemon shutdown complete"
	I0806 00:39:49.236355    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265902413Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0806 00:39:49.236364    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265913447Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0806 00:39:49.236371    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0806 00:39:49.236376    4292 command_runner.go:130] > Aug 06 07:38:49 multinode-100000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0806 00:39:49.236404    4292 command_runner.go:130] > Aug 06 07:38:49 multinode-100000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0806 00:39:49.236411    4292 command_runner.go:130] > Aug 06 07:38:49 multinode-100000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0806 00:39:49.236417    4292 command_runner.go:130] > Aug 06 07:38:49 multinode-100000-m02 dockerd[911]: time="2024-08-06T07:38:49.299585024Z" level=info msg="Starting up"
	I0806 00:39:49.236427    4292 command_runner.go:130] > Aug 06 07:39:49 multinode-100000-m02 dockerd[911]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0806 00:39:49.236434    4292 command_runner.go:130] > Aug 06 07:39:49 multinode-100000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0806 00:39:49.236440    4292 command_runner.go:130] > Aug 06 07:39:49 multinode-100000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0806 00:39:49.236446    4292 command_runner.go:130] > Aug 06 07:39:49 multinode-100000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0806 00:39:49.260697    4292 out.go:177] 
	W0806 00:39:49.281618    4292 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 06 07:38:46 multinode-100000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.029974914Z" level=info msg="Starting up"
	Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.030437769Z" level=info msg="containerd not running, starting managed containerd"
	Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.030979400Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=517
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.047036729Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064397167Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064452673Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064502313Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064513542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064584182Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064595120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064727739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064762709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064774342Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064782161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064887916Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.065042581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.066836201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.066879570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067028916Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067064324Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067179567Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067249087Z" level=info msg="metadata content store policy set" policy=shared
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069585528Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069659860Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069674694Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069684754Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069696901Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069776277Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070041788Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070145442Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070181841Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070193788Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070209053Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070220561Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070229053Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070237872Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070247145Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070258808Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070271932Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070282113Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070295317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070333749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070369063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070379382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070387399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070395816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070403669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070414456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070430669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070442977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070451302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070459477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070468439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070478113Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070497412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070508384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070518009Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070547883Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070582373Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070592270Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070600495Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070607217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070615273Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070622931Z" level=info msg="NRI interface is disabled by configuration."
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070750538Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070809085Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070954500Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070997549Z" level=info msg="containerd successfully booted in 0.024512s"
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.050791909Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.057142082Z" level=info msg="Loading containers: start."
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.142415375Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.222958623Z" level=info msg="Loading containers: done."
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.231011060Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.231179810Z" level=info msg="Daemon has completed initialization"
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.256766502Z" level=info msg="API listen on [::]:2376"
	Aug 06 07:38:47 multinode-100000-m02 systemd[1]: Started Docker Application Container Engine.
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.256921161Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.264611587Z" level=info msg="Processing signal 'terminated'"
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265650519Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265852818Z" level=info msg="Daemon shutdown complete"
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265902413Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265913447Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 06 07:38:48 multinode-100000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Aug 06 07:38:49 multinode-100000-m02 systemd[1]: docker.service: Deactivated successfully.
	Aug 06 07:38:49 multinode-100000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:38:49 multinode-100000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:38:49 multinode-100000-m02 dockerd[911]: time="2024-08-06T07:38:49.299585024Z" level=info msg="Starting up"
	Aug 06 07:39:49 multinode-100000-m02 dockerd[911]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 06 07:39:49 multinode-100000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 06 07:39:49 multinode-100000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 06 07:39:49 multinode-100000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 06 07:38:46 multinode-100000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.029974914Z" level=info msg="Starting up"
	Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.030437769Z" level=info msg="containerd not running, starting managed containerd"
	Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.030979400Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=517
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.047036729Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064397167Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064452673Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064502313Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064513542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064584182Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064595120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064727739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064762709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064774342Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064782161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064887916Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.065042581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.066836201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.066879570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067028916Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067064324Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067179567Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067249087Z" level=info msg="metadata content store policy set" policy=shared
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069585528Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069659860Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069674694Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069684754Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069696901Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069776277Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070041788Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070145442Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070181841Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070193788Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070209053Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070220561Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070229053Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070237872Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070247145Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070258808Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070271932Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070282113Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070295317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070333749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070369063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070379382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070387399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070395816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070403669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070414456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070430669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070442977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070451302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070459477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070468439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070478113Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070497412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070508384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070518009Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070547883Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070582373Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070592270Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070600495Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070607217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070615273Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070622931Z" level=info msg="NRI interface is disabled by configuration."
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070750538Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070809085Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070954500Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070997549Z" level=info msg="containerd successfully booted in 0.024512s"
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.050791909Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.057142082Z" level=info msg="Loading containers: start."
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.142415375Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.222958623Z" level=info msg="Loading containers: done."
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.231011060Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.231179810Z" level=info msg="Daemon has completed initialization"
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.256766502Z" level=info msg="API listen on [::]:2376"
	Aug 06 07:38:47 multinode-100000-m02 systemd[1]: Started Docker Application Container Engine.
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.256921161Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.264611587Z" level=info msg="Processing signal 'terminated'"
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265650519Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265852818Z" level=info msg="Daemon shutdown complete"
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265902413Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265913447Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 06 07:38:48 multinode-100000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Aug 06 07:38:49 multinode-100000-m02 systemd[1]: docker.service: Deactivated successfully.
	Aug 06 07:38:49 multinode-100000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:38:49 multinode-100000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:38:49 multinode-100000-m02 dockerd[911]: time="2024-08-06T07:38:49.299585024Z" level=info msg="Starting up"
	Aug 06 07:39:49 multinode-100000-m02 dockerd[911]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 06 07:39:49 multinode-100000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 06 07:39:49 multinode-100000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 06 07:39:49 multinode-100000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0806 00:39:49.281745    4292 out.go:239] * 
	W0806 00:39:49.282923    4292 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 00:39:49.343567    4292 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-100000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-100000 -n multinode-100000
helpers_test.go:244: <<< TestMultiNode/serial/FreshStart2Nodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-100000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-100000 logs -n 25: (2.124694967s)
helpers_test.go:252: TestMultiNode/serial/FreshStart2Nodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------|--------------------------|----------|---------|---------------------|---------------------|
	| Command |                   Args                   |         Profile          |   User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------|--------------------------|----------|---------|---------------------|---------------------|
	| stop    | ha-772000 stop -v=7                      | ha-772000                | jenkins  | v1.33.1 | 06 Aug 24 00:28 PDT | 06 Aug 24 00:28 PDT |
	|         | --alsologtostderr                        |                          |          |         |                     |                     |
	| start   | -p ha-772000 --wait=true                 | ha-772000                | jenkins  | v1.33.1 | 06 Aug 24 00:28 PDT |                     |
	|         | -v=7 --alsologtostderr                   |                          |          |         |                     |                     |
	|         | --driver=hyperkit                        |                          |          |         |                     |                     |
	| node    | add -p ha-772000                         | ha-772000                | jenkins  | v1.33.1 | 06 Aug 24 00:29 PDT |                     |
	|         | --control-plane -v=7                     |                          |          |         |                     |                     |
	|         | --alsologtostderr                        |                          |          |         |                     |                     |
	| delete  | -p ha-772000                             | ha-772000                | jenkins  | v1.33.1 | 06 Aug 24 00:29 PDT | 06 Aug 24 00:29 PDT |
	| start   | -p image-036000                          | image-036000             | jenkins  | v1.33.1 | 06 Aug 24 00:29 PDT | 06 Aug 24 00:30 PDT |
	|         | --driver=hyperkit                        |                          |          |         |                     |                     |
	| image   | build -t aaa:latest                      | image-036000             | jenkins  | v1.33.1 | 06 Aug 24 00:30 PDT | 06 Aug 24 00:30 PDT |
	|         | ./testdata/image-build/test-normal       |                          |          |         |                     |                     |
	|         | -p image-036000                          |                          |          |         |                     |                     |
	| image   | build -t aaa:latest                      | image-036000             | jenkins  | v1.33.1 | 06 Aug 24 00:30 PDT | 06 Aug 24 00:30 PDT |
	|         | --build-opt=build-arg=ENV_A=test_env_str |                          |          |         |                     |                     |
	|         | --build-opt=no-cache                     |                          |          |         |                     |                     |
	|         | ./testdata/image-build/test-arg -p       |                          |          |         |                     |                     |
	|         | image-036000                             |                          |          |         |                     |                     |
	| image   | build -t aaa:latest                      | image-036000             | jenkins  | v1.33.1 | 06 Aug 24 00:30 PDT | 06 Aug 24 00:30 PDT |
	|         | ./testdata/image-build/test-normal       |                          |          |         |                     |                     |
	|         | --build-opt=no-cache -p                  |                          |          |         |                     |                     |
	|         | image-036000                             |                          |          |         |                     |                     |
	| image   | build -t aaa:latest                      | image-036000             | jenkins  | v1.33.1 | 06 Aug 24 00:30 PDT | 06 Aug 24 00:30 PDT |
	|         | -f inner/Dockerfile                      |                          |          |         |                     |                     |
	|         | ./testdata/image-build/test-f            |                          |          |         |                     |                     |
	|         | -p image-036000                          |                          |          |         |                     |                     |
	| delete  | -p image-036000                          | image-036000             | jenkins  | v1.33.1 | 06 Aug 24 00:30 PDT | 06 Aug 24 00:30 PDT |
	| start   | -p json-output-960000                    | json-output-960000       | testUser | v1.33.1 | 06 Aug 24 00:30 PDT | 06 Aug 24 00:31 PDT |
	|         | --output=json --user=testUser            |                          |          |         |                     |                     |
	|         | --memory=2200 --wait=true                |                          |          |         |                     |                     |
	|         | --driver=hyperkit                        |                          |          |         |                     |                     |
	| pause   | -p json-output-960000                    | json-output-960000       | testUser | v1.33.1 | 06 Aug 24 00:31 PDT | 06 Aug 24 00:31 PDT |
	|         | --output=json --user=testUser            |                          |          |         |                     |                     |
	| unpause | -p json-output-960000                    | json-output-960000       | testUser | v1.33.1 | 06 Aug 24 00:31 PDT | 06 Aug 24 00:31 PDT |
	|         | --output=json --user=testUser            |                          |          |         |                     |                     |
	| stop    | -p json-output-960000                    | json-output-960000       | testUser | v1.33.1 | 06 Aug 24 00:31 PDT | 06 Aug 24 00:31 PDT |
	|         | --output=json --user=testUser            |                          |          |         |                     |                     |
	| delete  | -p json-output-960000                    | json-output-960000       | jenkins  | v1.33.1 | 06 Aug 24 00:31 PDT | 06 Aug 24 00:31 PDT |
	| start   | -p json-output-error-140000              | json-output-error-140000 | jenkins  | v1.33.1 | 06 Aug 24 00:31 PDT |                     |
	|         | --memory=2200 --output=json              |                          |          |         |                     |                     |
	|         | --wait=true --driver=fail                |                          |          |         |                     |                     |
	| delete  | -p json-output-error-140000              | json-output-error-140000 | jenkins  | v1.33.1 | 06 Aug 24 00:31 PDT | 06 Aug 24 00:31 PDT |
	| start   | -p first-500000                          | first-500000             | jenkins  | v1.33.1 | 06 Aug 24 00:31 PDT | 06 Aug 24 00:32 PDT |
	|         | --driver=hyperkit                        |                          |          |         |                     |                     |
	| start   | -p second-502000                         | second-502000            | jenkins  | v1.33.1 | 06 Aug 24 00:32 PDT | 06 Aug 24 00:32 PDT |
	|         | --driver=hyperkit                        |                          |          |         |                     |                     |
	| delete  | -p second-502000                         | second-502000            | jenkins  | v1.33.1 | 06 Aug 24 00:32 PDT | 06 Aug 24 00:33 PDT |
	| delete  | -p first-500000                          | first-500000             | jenkins  | v1.33.1 | 06 Aug 24 00:33 PDT | 06 Aug 24 00:33 PDT |
	| start   | -p mount-start-1-243000                  | mount-start-1-243000     | jenkins  | v1.33.1 | 06 Aug 24 00:33 PDT |                     |
	|         | --memory=2048 --mount                    |                          |          |         |                     |                     |
	|         | --mount-gid 0 --mount-msize              |                          |          |         |                     |                     |
	|         | 6543 --mount-port 46464                  |                          |          |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes            |                          |          |         |                     |                     |
	|         | --driver=hyperkit                        |                          |          |         |                     |                     |
	| delete  | -p mount-start-2-257000                  | mount-start-2-257000     | jenkins  | v1.33.1 | 06 Aug 24 00:35 PDT | 06 Aug 24 00:35 PDT |
	| delete  | -p mount-start-1-243000                  | mount-start-1-243000     | jenkins  | v1.33.1 | 06 Aug 24 00:35 PDT | 06 Aug 24 00:35 PDT |
	| start   | -p multinode-100000                      | multinode-100000         | jenkins  | v1.33.1 | 06 Aug 24 00:35 PDT |                     |
	|         | --wait=true --memory=2200                |                          |          |         |                     |                     |
	|         | --nodes=2 -v=8                           |                          |          |         |                     |                     |
	|         | --alsologtostderr                        |                          |          |         |                     |                     |
	|         | --driver=hyperkit                        |                          |          |         |                     |                     |
	|---------|------------------------------------------|--------------------------|----------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 00:35:32
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 00:35:32.676325    4292 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:35:32.676601    4292 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:35:32.676607    4292 out.go:304] Setting ErrFile to fd 2...
	I0806 00:35:32.676610    4292 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:35:32.676768    4292 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	I0806 00:35:32.678248    4292 out.go:298] Setting JSON to false
	I0806 00:35:32.700659    4292 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2094,"bootTime":1722927638,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0806 00:35:32.700749    4292 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 00:35:32.723275    4292 out.go:177] * [multinode-100000] minikube v1.33.1 on Darwin 14.5
	I0806 00:35:32.765686    4292 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 00:35:32.765838    4292 notify.go:220] Checking for updates...
	I0806 00:35:32.808341    4292 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:35:32.829496    4292 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0806 00:35:32.850407    4292 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:35:32.871672    4292 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 00:35:32.892641    4292 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 00:35:32.913945    4292 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:35:32.944520    4292 out.go:177] * Using the hyperkit driver based on user configuration
	I0806 00:35:32.986143    4292 start.go:297] selected driver: hyperkit
	I0806 00:35:32.986161    4292 start.go:901] validating driver "hyperkit" against <nil>
	I0806 00:35:32.986176    4292 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 00:35:32.989717    4292 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:35:32.989824    4292 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19370-944/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0806 00:35:32.998218    4292 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0806 00:35:33.002169    4292 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:35:33.002189    4292 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0806 00:35:33.002223    4292 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 00:35:33.002423    4292 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 00:35:33.002481    4292 cni.go:84] Creating CNI manager for ""
	I0806 00:35:33.002490    4292 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0806 00:35:33.002502    4292 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0806 00:35:33.002569    4292 start.go:340] cluster config:
	{Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:35:33.002652    4292 iso.go:125] acquiring lock: {Name:mka9ceffb203a07dd8928fb34e5b66df1a4204ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:35:33.044508    4292 out.go:177] * Starting "multinode-100000" primary control-plane node in "multinode-100000" cluster
	I0806 00:35:33.065219    4292 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:35:33.065293    4292 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0806 00:35:33.065354    4292 cache.go:56] Caching tarball of preloaded images
	I0806 00:35:33.065635    4292 preload.go:172] Found /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0806 00:35:33.065654    4292 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 00:35:33.066173    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:35:33.066211    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json: {Name:mk72349cbf3074da6761af52b168e673548f3ffe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:35:33.066817    4292 start.go:360] acquireMachinesLock for multinode-100000: {Name:mk23fe223591838ba69a1052c4474834b6e8897d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:35:33.066922    4292 start.go:364] duration metric: took 85.684µs to acquireMachinesLock for "multinode-100000"
	I0806 00:35:33.066972    4292 start.go:93] Provisioning new machine with config: &{Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 00:35:33.067065    4292 start.go:125] createHost starting for "" (driver="hyperkit")
	I0806 00:35:33.088582    4292 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 00:35:33.088841    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:35:33.088907    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:35:33.098805    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52410
	I0806 00:35:33.099159    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:35:33.099600    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:35:33.099614    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:35:33.099818    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:35:33.099943    4292 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:35:33.100033    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:33.100130    4292 start.go:159] libmachine.API.Create for "multinode-100000" (driver="hyperkit")
	I0806 00:35:33.100152    4292 client.go:168] LocalClient.Create starting
	I0806 00:35:33.100189    4292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem
	I0806 00:35:33.100243    4292 main.go:141] libmachine: Decoding PEM data...
	I0806 00:35:33.100257    4292 main.go:141] libmachine: Parsing certificate...
	I0806 00:35:33.100320    4292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem
	I0806 00:35:33.100359    4292 main.go:141] libmachine: Decoding PEM data...
	I0806 00:35:33.100370    4292 main.go:141] libmachine: Parsing certificate...
	I0806 00:35:33.100382    4292 main.go:141] libmachine: Running pre-create checks...
	I0806 00:35:33.100392    4292 main.go:141] libmachine: (multinode-100000) Calling .PreCreateCheck
	I0806 00:35:33.100485    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:33.100635    4292 main.go:141] libmachine: (multinode-100000) Calling .GetConfigRaw
	I0806 00:35:33.109837    4292 main.go:141] libmachine: Creating machine...
	I0806 00:35:33.109854    4292 main.go:141] libmachine: (multinode-100000) Calling .Create
	I0806 00:35:33.110025    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:33.110277    4292 main.go:141] libmachine: (multinode-100000) DBG | I0806 00:35:33.110022    4300 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 00:35:33.110418    4292 main.go:141] libmachine: (multinode-100000) Downloading /Users/jenkins/minikube-integration/19370-944/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-944/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 00:35:33.295827    4292 main.go:141] libmachine: (multinode-100000) DBG | I0806 00:35:33.295690    4300 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa...
	I0806 00:35:33.502634    4292 main.go:141] libmachine: (multinode-100000) DBG | I0806 00:35:33.502493    4300 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/multinode-100000.rawdisk...
	I0806 00:35:33.502655    4292 main.go:141] libmachine: (multinode-100000) DBG | Writing magic tar header
	I0806 00:35:33.502665    4292 main.go:141] libmachine: (multinode-100000) DBG | Writing SSH key tar header
	I0806 00:35:33.503537    4292 main.go:141] libmachine: (multinode-100000) DBG | I0806 00:35:33.503390    4300 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000 ...
	I0806 00:35:33.877390    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:33.877412    4292 main.go:141] libmachine: (multinode-100000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/hyperkit.pid
	I0806 00:35:33.877424    4292 main.go:141] libmachine: (multinode-100000) DBG | Using UUID 9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848
	I0806 00:35:33.988705    4292 main.go:141] libmachine: (multinode-100000) DBG | Generated MAC 1a:eb:5b:3:28:91
	I0806 00:35:33.988725    4292 main.go:141] libmachine: (multinode-100000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000
	I0806 00:35:33.988759    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000aa330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 00:35:33.988793    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000aa330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 00:35:33.988839    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/multinode-100000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"}
	I0806 00:35:33.988870    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/multinode-100000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/console-ring -f kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"
	I0806 00:35:33.988893    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0806 00:35:33.991956    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: Pid is 4303
	I0806 00:35:33.992376    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 0
	I0806 00:35:33.992391    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:33.992446    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:33.993278    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:33.993360    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:33.993380    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:33.993405    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:33.993424    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:33.993437    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:33.993449    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:33.993464    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:33.993498    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:33.993520    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:33.993540    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:33.993552    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:33.993562    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:33.999245    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0806 00:35:34.053136    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0806 00:35:34.053714    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:35:34.053737    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:35:34.053746    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:35:34.053754    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:35:34.433368    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0806 00:35:34.433384    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0806 00:35:34.548018    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:35:34.548040    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:35:34.548066    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:35:34.548085    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:35:34.548944    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0806 00:35:34.548954    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0806 00:35:35.995149    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 1
	I0806 00:35:35.995163    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:35.995266    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:35.996054    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:35.996094    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:35.996108    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:35.996132    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:35.996169    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:35.996185    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:35.996200    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:35.996223    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:35.996236    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:35.996250    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:35.996258    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:35.996265    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:35.996272    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:37.997721    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 2
	I0806 00:35:37.997737    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:37.997833    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:37.998751    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:37.998796    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:37.998808    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:37.998817    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:37.998824    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:37.998834    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:37.998843    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:37.998850    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:37.998857    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:37.998872    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:37.998885    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:37.998906    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:37.998915    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:40.000050    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 3
	I0806 00:35:40.000064    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:40.000167    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:40.000922    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:40.000982    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:40.000992    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:40.001002    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:40.001009    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:40.001016    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:40.001021    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:40.001028    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:40.001034    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:40.001051    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:40.001065    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:40.001075    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:40.001092    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:40.125670    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:40 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0806 00:35:40.125726    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:40 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0806 00:35:40.125735    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:40 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0806 00:35:40.149566    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:40 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0806 00:35:42.001968    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 4
	I0806 00:35:42.001983    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:42.002066    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:42.002835    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:42.002890    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:42.002900    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:42.002909    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:42.002917    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:42.002940    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:42.002948    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:42.002955    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:42.002964    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:42.002970    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:42.002978    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:42.002985    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:42.002996    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:44.004662    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 5
	I0806 00:35:44.004678    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:44.004700    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:44.005526    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:44.005569    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:35:44.005581    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:35:44.005591    4292 main.go:141] libmachine: (multinode-100000) DBG | Found match: 1a:eb:5b:3:28:91
	I0806 00:35:44.005619    4292 main.go:141] libmachine: (multinode-100000) DBG | IP: 192.169.0.13
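	The search loop above repeatedly scans `/var/db/dhcpd_leases` for the new VM's hardware address (`1a:eb:5b:3:28:91`) until a matching lease appears, then reports its IP. A minimal Go sketch of that matching step, with an illustrative regex and helper name (not minikube's actual parser):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// findIPForMAC scans dhcpd_leases-style entries for a hardware address
// and returns the matching IP, mimicking the search loop in the log.
// Illustrative sketch only; the real driver parses the lease file itself.
func findIPForMAC(leases, mac string) (string, bool) {
	re := regexp.MustCompile(`IPAddress:(\S+) HWAddress:(\S+) `)
	for _, line := range strings.Split(leases, "\n") {
		if m := re.FindStringSubmatch(line); m != nil && m[2] == mac {
			return m[1], true
		}
	}
	return "", false
}

func main() {
	// One entry copied from the log above.
	leases := "{Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}"
	ip, ok := findIPForMAC(leases, "1a:eb:5b:3:28:91")
	fmt.Println(ip, ok)
}
```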
	I0806 00:35:44.005700    4292 main.go:141] libmachine: (multinode-100000) Calling .GetConfigRaw
	I0806 00:35:44.006323    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:44.006428    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:44.006524    4292 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0806 00:35:44.006537    4292 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:35:44.006634    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:44.006694    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:44.007476    4292 main.go:141] libmachine: Detecting operating system of created instance...
	I0806 00:35:44.007487    4292 main.go:141] libmachine: Waiting for SSH to be available...
	I0806 00:35:44.007493    4292 main.go:141] libmachine: Getting to WaitForSSH function...
	I0806 00:35:44.007498    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:44.007591    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:44.007674    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:44.007764    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:44.007853    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:44.007987    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:44.008184    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:44.008192    4292 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0806 00:35:45.076448    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:35:45.076465    4292 main.go:141] libmachine: Detecting the provisioner...
	I0806 00:35:45.076471    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.076624    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.076724    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.076819    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.076915    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.077045    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.077189    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.077197    4292 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0806 00:35:45.144548    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0806 00:35:45.144591    4292 main.go:141] libmachine: found compatible host: buildroot
	I0806 00:35:45.144598    4292 main.go:141] libmachine: Provisioning with buildroot...
	I0806 00:35:45.144603    4292 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:35:45.144740    4292 buildroot.go:166] provisioning hostname "multinode-100000"
	I0806 00:35:45.144749    4292 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:35:45.144843    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.144938    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.145034    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.145124    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.145213    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.145351    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.145492    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.145501    4292 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-100000 && echo "multinode-100000" | sudo tee /etc/hostname
	I0806 00:35:45.223228    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-100000
	
	I0806 00:35:45.223249    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.223379    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.223481    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.223570    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.223660    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.223790    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.223939    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.223951    4292 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-100000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-100000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-100000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 00:35:45.292034    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:35:45.292059    4292 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19370-944/.minikube CaCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19370-944/.minikube}
	I0806 00:35:45.292078    4292 buildroot.go:174] setting up certificates
	I0806 00:35:45.292089    4292 provision.go:84] configureAuth start
	I0806 00:35:45.292095    4292 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:35:45.292225    4292 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:35:45.292323    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.292419    4292 provision.go:143] copyHostCerts
	I0806 00:35:45.292449    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:35:45.292512    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem, removing ...
	I0806 00:35:45.292520    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:35:45.292668    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem (1078 bytes)
	I0806 00:35:45.292900    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:35:45.292931    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem, removing ...
	I0806 00:35:45.292935    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:35:45.293022    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem (1123 bytes)
	I0806 00:35:45.293179    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:35:45.293218    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem, removing ...
	I0806 00:35:45.293223    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:35:45.293307    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem (1679 bytes)
	I0806 00:35:45.293461    4292 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem org=jenkins.multinode-100000 san=[127.0.0.1 192.169.0.13 localhost minikube multinode-100000]
	I0806 00:35:45.520073    4292 provision.go:177] copyRemoteCerts
	I0806 00:35:45.520131    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 00:35:45.520149    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.520304    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.520400    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.520492    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.520588    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:35:45.562400    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0806 00:35:45.562481    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 00:35:45.581346    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0806 00:35:45.581402    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0806 00:35:45.600722    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0806 00:35:45.600779    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 00:35:45.620152    4292 provision.go:87] duration metric: took 328.044128ms to configureAuth
	I0806 00:35:45.620167    4292 buildroot.go:189] setting minikube options for container-runtime
	I0806 00:35:45.620308    4292 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:35:45.620324    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:45.620480    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.620572    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.620655    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.620746    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.620832    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.620951    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.621092    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.621099    4292 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0806 00:35:45.688009    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0806 00:35:45.688025    4292 buildroot.go:70] root file system type: tmpfs
	I0806 00:35:45.688103    4292 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0806 00:35:45.688116    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.688258    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.688371    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.688463    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.688579    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.688745    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.688882    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.688931    4292 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0806 00:35:45.766293    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0806 00:35:45.766319    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.766466    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.766564    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.766645    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.766724    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.766843    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.766987    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.766999    4292 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0806 00:35:47.341714    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0806 00:35:47.341733    4292 main.go:141] libmachine: Checking connection to Docker...
	I0806 00:35:47.341750    4292 main.go:141] libmachine: (multinode-100000) Calling .GetURL
	I0806 00:35:47.341889    4292 main.go:141] libmachine: Docker is up and running!
	I0806 00:35:47.341898    4292 main.go:141] libmachine: Reticulating splines...
	I0806 00:35:47.341902    4292 client.go:171] duration metric: took 14.241464585s to LocalClient.Create
	I0806 00:35:47.341919    4292 start.go:167] duration metric: took 14.241510649s to libmachine.API.Create "multinode-100000"
	I0806 00:35:47.341930    4292 start.go:293] postStartSetup for "multinode-100000" (driver="hyperkit")
	I0806 00:35:47.341937    4292 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 00:35:47.341947    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.342092    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 00:35:47.342105    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:47.342199    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:47.342285    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.342379    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:47.342467    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:35:47.382587    4292 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 00:35:47.385469    4292 command_runner.go:130] > NAME=Buildroot
	I0806 00:35:47.385477    4292 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0806 00:35:47.385481    4292 command_runner.go:130] > ID=buildroot
	I0806 00:35:47.385485    4292 command_runner.go:130] > VERSION_ID=2023.02.9
	I0806 00:35:47.385489    4292 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0806 00:35:47.385581    4292 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 00:35:47.385594    4292 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/addons for local assets ...
	I0806 00:35:47.385696    4292 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/files for local assets ...
	I0806 00:35:47.385887    4292 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> 14372.pem in /etc/ssl/certs
	I0806 00:35:47.385903    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> /etc/ssl/certs/14372.pem
	I0806 00:35:47.386118    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 00:35:47.394135    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem --> /etc/ssl/certs/14372.pem (1708 bytes)
	I0806 00:35:47.413151    4292 start.go:296] duration metric: took 71.212336ms for postStartSetup
	I0806 00:35:47.413177    4292 main.go:141] libmachine: (multinode-100000) Calling .GetConfigRaw
	I0806 00:35:47.413783    4292 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:35:47.413932    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:35:47.414265    4292 start.go:128] duration metric: took 14.346903661s to createHost
	I0806 00:35:47.414279    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:47.414369    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:47.414451    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.414534    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.414620    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:47.414723    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:47.414850    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:47.414859    4292 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 00:35:47.480376    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722929747.524109427
	
	I0806 00:35:47.480388    4292 fix.go:216] guest clock: 1722929747.524109427
	I0806 00:35:47.480393    4292 fix.go:229] Guest: 2024-08-06 00:35:47.524109427 -0700 PDT Remote: 2024-08-06 00:35:47.414273 -0700 PDT m=+14.774098631 (delta=109.836427ms)
	I0806 00:35:47.480413    4292 fix.go:200] guest clock delta is within tolerance: 109.836427ms
	I0806 00:35:47.480416    4292 start.go:83] releasing machines lock for "multinode-100000", held for 14.413201307s
	I0806 00:35:47.480435    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.480582    4292 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:35:47.480686    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.481025    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.481144    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.481220    4292 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 00:35:47.481250    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:47.481279    4292 ssh_runner.go:195] Run: cat /version.json
	I0806 00:35:47.481291    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:47.481352    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:47.481353    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:47.481449    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.481463    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.481541    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:47.481556    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:47.481638    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:35:47.481653    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:35:47.582613    4292 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0806 00:35:47.583428    4292 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0806 00:35:47.583596    4292 ssh_runner.go:195] Run: systemctl --version
	I0806 00:35:47.588843    4292 command_runner.go:130] > systemd 252 (252)
	I0806 00:35:47.588866    4292 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0806 00:35:47.588920    4292 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0806 00:35:47.593612    4292 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0806 00:35:47.593639    4292 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 00:35:47.593687    4292 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 00:35:47.607350    4292 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0806 00:35:47.607480    4292 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 00:35:47.607494    4292 start.go:495] detecting cgroup driver to use...
	I0806 00:35:47.607588    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:35:47.622260    4292 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0806 00:35:47.622586    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0806 00:35:47.631764    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0806 00:35:47.640650    4292 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0806 00:35:47.640704    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0806 00:35:47.649724    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:35:47.658558    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0806 00:35:47.667341    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:35:47.677183    4292 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 00:35:47.686281    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0806 00:35:47.695266    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0806 00:35:47.704014    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0806 00:35:47.712970    4292 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 00:35:47.720743    4292 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0806 00:35:47.720841    4292 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 00:35:47.728846    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:35:47.828742    4292 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0806 00:35:47.848191    4292 start.go:495] detecting cgroup driver to use...
	I0806 00:35:47.848271    4292 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0806 00:35:47.862066    4292 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0806 00:35:47.862604    4292 command_runner.go:130] > [Unit]
	I0806 00:35:47.862619    4292 command_runner.go:130] > Description=Docker Application Container Engine
	I0806 00:35:47.862625    4292 command_runner.go:130] > Documentation=https://docs.docker.com
	I0806 00:35:47.862630    4292 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0806 00:35:47.862634    4292 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0806 00:35:47.862642    4292 command_runner.go:130] > StartLimitBurst=3
	I0806 00:35:47.862646    4292 command_runner.go:130] > StartLimitIntervalSec=60
	I0806 00:35:47.862663    4292 command_runner.go:130] > [Service]
	I0806 00:35:47.862670    4292 command_runner.go:130] > Type=notify
	I0806 00:35:47.862674    4292 command_runner.go:130] > Restart=on-failure
	I0806 00:35:47.862696    4292 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0806 00:35:47.862704    4292 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0806 00:35:47.862710    4292 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0806 00:35:47.862716    4292 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0806 00:35:47.862724    4292 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0806 00:35:47.862731    4292 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0806 00:35:47.862742    4292 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0806 00:35:47.862756    4292 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0806 00:35:47.862768    4292 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0806 00:35:47.862789    4292 command_runner.go:130] > ExecStart=
	I0806 00:35:47.862803    4292 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0806 00:35:47.862808    4292 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0806 00:35:47.862814    4292 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0806 00:35:47.862820    4292 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0806 00:35:47.862826    4292 command_runner.go:130] > LimitNOFILE=infinity
	I0806 00:35:47.862831    4292 command_runner.go:130] > LimitNPROC=infinity
	I0806 00:35:47.862835    4292 command_runner.go:130] > LimitCORE=infinity
	I0806 00:35:47.862840    4292 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0806 00:35:47.862847    4292 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0806 00:35:47.862852    4292 command_runner.go:130] > TasksMax=infinity
	I0806 00:35:47.862857    4292 command_runner.go:130] > TimeoutStartSec=0
	I0806 00:35:47.862864    4292 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0806 00:35:47.862869    4292 command_runner.go:130] > Delegate=yes
	I0806 00:35:47.862875    4292 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0806 00:35:47.862880    4292 command_runner.go:130] > KillMode=process
	I0806 00:35:47.862885    4292 command_runner.go:130] > [Install]
	I0806 00:35:47.862897    4292 command_runner.go:130] > WantedBy=multi-user.target
	I0806 00:35:47.862957    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:35:47.874503    4292 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 00:35:47.888401    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:35:47.899678    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:35:47.910858    4292 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0806 00:35:47.935194    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:35:47.946319    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:35:47.961240    4292 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0806 00:35:47.961509    4292 ssh_runner.go:195] Run: which cri-dockerd
	I0806 00:35:47.964405    4292 command_runner.go:130] > /usr/bin/cri-dockerd
	I0806 00:35:47.964539    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0806 00:35:47.972571    4292 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0806 00:35:47.986114    4292 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0806 00:35:48.089808    4292 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0806 00:35:48.189821    4292 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0806 00:35:48.189902    4292 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0806 00:35:48.205371    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:35:48.305180    4292 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:35:50.610688    4292 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.305442855s)
	I0806 00:35:50.610744    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0806 00:35:50.621917    4292 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0806 00:37:45.085447    4292 ssh_runner.go:235] Completed: sudo systemctl stop cri-docker.socket: (1m54.461245771s)
	I0806 00:37:45.085519    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0806 00:37:45.097196    4292 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0806 00:37:45.197114    4292 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0806 00:37:45.292406    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:37:45.391129    4292 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0806 00:37:45.405046    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0806 00:37:45.416102    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:37:45.533604    4292 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0806 00:37:45.589610    4292 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0806 00:37:45.589706    4292 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0806 00:37:45.594037    4292 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0806 00:37:45.594049    4292 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0806 00:37:45.594054    4292 command_runner.go:130] > Device: 0,22	Inode: 805         Links: 1
	I0806 00:37:45.594060    4292 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0806 00:37:45.594064    4292 command_runner.go:130] > Access: 2024-08-06 07:37:45.625216614 +0000
	I0806 00:37:45.594069    4292 command_runner.go:130] > Modify: 2024-08-06 07:37:45.625216614 +0000
	I0806 00:37:45.594073    4292 command_runner.go:130] > Change: 2024-08-06 07:37:45.627215775 +0000
	I0806 00:37:45.594076    4292 command_runner.go:130] >  Birth: -
	I0806 00:37:45.594117    4292 start.go:563] Will wait 60s for crictl version
	I0806 00:37:45.594161    4292 ssh_runner.go:195] Run: which crictl
	I0806 00:37:45.596956    4292 command_runner.go:130] > /usr/bin/crictl
	I0806 00:37:45.597171    4292 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 00:37:45.621060    4292 command_runner.go:130] > Version:  0.1.0
	I0806 00:37:45.621116    4292 command_runner.go:130] > RuntimeName:  docker
	I0806 00:37:45.621195    4292 command_runner.go:130] > RuntimeVersion:  27.1.1
	I0806 00:37:45.621265    4292 command_runner.go:130] > RuntimeApiVersion:  v1
	I0806 00:37:45.622461    4292 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0806 00:37:45.622524    4292 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0806 00:37:45.639748    4292 command_runner.go:130] > 27.1.1
	I0806 00:37:45.640898    4292 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0806 00:37:45.659970    4292 command_runner.go:130] > 27.1.1
	I0806 00:37:45.682623    4292 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0806 00:37:45.682654    4292 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:37:45.682940    4292 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0806 00:37:45.686120    4292 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 00:37:45.696475    4292 kubeadm.go:883] updating cluster {Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 00:37:45.696537    4292 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:37:45.696591    4292 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0806 00:37:45.709358    4292 docker.go:685] Got preloaded images: 
	I0806 00:37:45.709371    4292 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0806 00:37:45.709415    4292 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0806 00:37:45.717614    4292 command_runner.go:139] > {"Repositories":{}}
	I0806 00:37:45.717741    4292 ssh_runner.go:195] Run: which lz4
	I0806 00:37:45.720684    4292 command_runner.go:130] > /usr/bin/lz4
	I0806 00:37:45.720774    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0806 00:37:45.720887    4292 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0806 00:37:45.723901    4292 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 00:37:45.723990    4292 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 00:37:45.724007    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
	I0806 00:37:46.617374    4292 docker.go:649] duration metric: took 896.51057ms to copy over tarball
	I0806 00:37:46.617438    4292 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0806 00:37:48.962709    4292 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.345209203s)
	I0806 00:37:48.962723    4292 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 00:37:48.989708    4292 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0806 00:37:48.998314    4292 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.3":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.3":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.3":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d2
89d99da794784d1"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.3":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0806 00:37:48.998434    4292 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0806 00:37:49.011940    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:37:49.104996    4292 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:37:51.441428    4292 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.336367372s)
	I0806 00:37:51.441504    4292 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0806 00:37:51.454654    4292 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0806 00:37:51.454669    4292 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0806 00:37:51.454674    4292 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0806 00:37:51.454682    4292 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0806 00:37:51.454686    4292 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0806 00:37:51.454690    4292 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0806 00:37:51.454695    4292 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0806 00:37:51.454700    4292 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:37:51.455392    4292 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0806 00:37:51.455409    4292 cache_images.go:84] Images are preloaded, skipping loading
	I0806 00:37:51.455420    4292 kubeadm.go:934] updating node { 192.169.0.13 8443 v1.30.3 docker true true} ...
	I0806 00:37:51.455506    4292 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-100000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 00:37:51.455578    4292 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0806 00:37:51.493148    4292 command_runner.go:130] > cgroupfs
	I0806 00:37:51.493761    4292 cni.go:84] Creating CNI manager for ""
	I0806 00:37:51.493770    4292 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0806 00:37:51.493779    4292 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 00:37:51.493799    4292 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.13 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-100000 NodeName:multinode-100000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 00:37:51.493886    4292 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-100000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 00:37:51.493946    4292 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 00:37:51.501517    4292 command_runner.go:130] > kubeadm
	I0806 00:37:51.501524    4292 command_runner.go:130] > kubectl
	I0806 00:37:51.501527    4292 command_runner.go:130] > kubelet
	I0806 00:37:51.501670    4292 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 00:37:51.501712    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 00:37:51.509045    4292 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0806 00:37:51.522572    4292 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 00:37:51.535791    4292 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0806 00:37:51.549550    4292 ssh_runner.go:195] Run: grep 192.169.0.13	control-plane.minikube.internal$ /etc/hosts
	I0806 00:37:51.552639    4292 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 00:37:51.562209    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:37:51.657200    4292 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:37:51.669303    4292 certs.go:68] Setting up /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000 for IP: 192.169.0.13
	I0806 00:37:51.669315    4292 certs.go:194] generating shared ca certs ...
	I0806 00:37:51.669325    4292 certs.go:226] acquiring lock for ca certs: {Name:mk58145664d6c2b1eff70ba1600cc91cf1a11355 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.669518    4292 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.key
	I0806 00:37:51.669593    4292 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.key
	I0806 00:37:51.669606    4292 certs.go:256] generating profile certs ...
	I0806 00:37:51.669656    4292 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key
	I0806 00:37:51.669668    4292 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt with IP's: []
	I0806 00:37:51.792624    4292 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt ...
	I0806 00:37:51.792639    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt: {Name:mk8667fc194de8cf8fded4f6b0b716fe105f94fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.792981    4292 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key ...
	I0806 00:37:51.792989    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key: {Name:mk5693609b0c83eb3bce2eae7a5d8211445280d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.793215    4292 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key.de816dec
	I0806 00:37:51.793229    4292 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt.de816dec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.13]
	I0806 00:37:51.926808    4292 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt.de816dec ...
	I0806 00:37:51.926818    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt.de816dec: {Name:mk977e2f365dba4e3b0587a998566fa4d7926493 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.927069    4292 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key.de816dec ...
	I0806 00:37:51.927078    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key.de816dec: {Name:mkdef83341ea7ae5698bd9e2d60c39f8cd2a4e46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.927285    4292 certs.go:381] copying /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt.de816dec -> /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt
	I0806 00:37:51.927484    4292 certs.go:385] copying /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key.de816dec -> /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key
	I0806 00:37:51.927653    4292 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key
	I0806 00:37:51.927669    4292 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt with IP's: []
	I0806 00:37:52.088433    4292 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt ...
	I0806 00:37:52.088444    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt: {Name:mkc673b9a3bc6652ddb14f333f9d124c615a6826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:52.088718    4292 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key ...
	I0806 00:37:52.088726    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key: {Name:mkf7f90929aa11855cc285630f5ad4bb575ccae4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:52.088945    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0806 00:37:52.088974    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0806 00:37:52.088995    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0806 00:37:52.089015    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0806 00:37:52.089034    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0806 00:37:52.089054    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0806 00:37:52.089072    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0806 00:37:52.089091    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0806 00:37:52.089188    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437.pem (1338 bytes)
	W0806 00:37:52.089246    4292 certs.go:480] ignoring /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437_empty.pem, impossibly tiny 0 bytes
	I0806 00:37:52.089257    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 00:37:52.089300    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem (1078 bytes)
	I0806 00:37:52.089366    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem (1123 bytes)
	I0806 00:37:52.089422    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem (1679 bytes)
	I0806 00:37:52.089542    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem (1708 bytes)
	I0806 00:37:52.089590    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.089613    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.089632    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437.pem -> /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.090046    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 00:37:52.111710    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 00:37:52.131907    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 00:37:52.151479    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0806 00:37:52.171693    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0806 00:37:52.191484    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 00:37:52.211176    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 00:37:52.230802    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 00:37:52.250506    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem --> /usr/share/ca-certificates/14372.pem (1708 bytes)
	I0806 00:37:52.270606    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 00:37:52.290275    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437.pem --> /usr/share/ca-certificates/1437.pem (1338 bytes)
	I0806 00:37:52.309237    4292 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 00:37:52.323119    4292 ssh_runner.go:195] Run: openssl version
	I0806 00:37:52.327113    4292 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0806 00:37:52.327315    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14372.pem && ln -fs /usr/share/ca-certificates/14372.pem /etc/ssl/certs/14372.pem"
	I0806 00:37:52.335532    4292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.338816    4292 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  6 07:14 /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.338844    4292 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:14 /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.338901    4292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.343016    4292 command_runner.go:130] > 3ec20f2e
	I0806 00:37:52.343165    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14372.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 00:37:52.351433    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 00:37:52.362210    4292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.368669    4292 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  6 07:05 /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.368937    4292 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:05 /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.368987    4292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.373469    4292 command_runner.go:130] > b5213941
	I0806 00:37:52.373704    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 00:37:52.384235    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1437.pem && ln -fs /usr/share/ca-certificates/1437.pem /etc/ssl/certs/1437.pem"
	I0806 00:37:52.395305    4292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.400212    4292 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  6 07:14 /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.400421    4292 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:14 /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.400474    4292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.406136    4292 command_runner.go:130] > 51391683
	I0806 00:37:52.406235    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1437.pem /etc/ssl/certs/51391683.0"
	I0806 00:37:52.415464    4292 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 00:37:52.418597    4292 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0806 00:37:52.418637    4292 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0806 00:37:52.418680    4292 kubeadm.go:392] StartCluster: {Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:37:52.418767    4292 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0806 00:37:52.431331    4292 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 00:37:52.439651    4292 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0806 00:37:52.439663    4292 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0806 00:37:52.439684    4292 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0806 00:37:52.439814    4292 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 00:37:52.447838    4292 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 00:37:52.455844    4292 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0806 00:37:52.455854    4292 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0806 00:37:52.455860    4292 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0806 00:37:52.455865    4292 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 00:37:52.455878    4292 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 00:37:52.455884    4292 kubeadm.go:157] found existing configuration files:
	
	I0806 00:37:52.455917    4292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 00:37:52.463564    4292 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 00:37:52.463581    4292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 00:37:52.463638    4292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 00:37:52.471500    4292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 00:37:52.479060    4292 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 00:37:52.479083    4292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 00:37:52.479115    4292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 00:37:52.487038    4292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 00:37:52.494658    4292 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 00:37:52.494678    4292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 00:37:52.494715    4292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 00:37:52.502699    4292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 00:37:52.510396    4292 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 00:37:52.510413    4292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 00:37:52.510448    4292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 00:37:52.518459    4292 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 00:37:52.582551    4292 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0806 00:37:52.582567    4292 command_runner.go:130] > [init] Using Kubernetes version: v1.30.3
	I0806 00:37:52.582622    4292 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 00:37:52.582630    4292 command_runner.go:130] > [preflight] Running pre-flight checks
	I0806 00:37:52.670948    4292 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 00:37:52.670966    4292 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 00:37:52.671056    4292 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 00:37:52.671068    4292 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 00:37:52.671166    4292 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 00:37:52.671175    4292 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 00:37:52.840152    4292 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 00:37:52.840173    4292 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 00:37:52.860448    4292 out.go:204]   - Generating certificates and keys ...
	I0806 00:37:52.860515    4292 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0806 00:37:52.860522    4292 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 00:37:52.860574    4292 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0806 00:37:52.860578    4292 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 00:37:53.262704    4292 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0806 00:37:53.262716    4292 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0806 00:37:53.357977    4292 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0806 00:37:53.357990    4292 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0806 00:37:53.460380    4292 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0806 00:37:53.460383    4292 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0806 00:37:53.557795    4292 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0806 00:37:53.557804    4292 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0806 00:37:53.672961    4292 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0806 00:37:53.672972    4292 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0806 00:37:53.673143    4292 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-100000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0806 00:37:53.673153    4292 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-100000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0806 00:37:53.823821    4292 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0806 00:37:53.823828    4292 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0806 00:37:53.823935    4292 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-100000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0806 00:37:53.823943    4292 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-100000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0806 00:37:53.907043    4292 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0806 00:37:53.907053    4292 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0806 00:37:54.170203    4292 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0806 00:37:54.170215    4292 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0806 00:37:54.232963    4292 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0806 00:37:54.232976    4292 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0806 00:37:54.233108    4292 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 00:37:54.233115    4292 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 00:37:54.560300    4292 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 00:37:54.560310    4292 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 00:37:54.689503    4292 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0806 00:37:54.689520    4292 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0806 00:37:54.772704    4292 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 00:37:54.772714    4292 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 00:37:54.901757    4292 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 00:37:54.901770    4292 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 00:37:55.057967    4292 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 00:37:55.057987    4292 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 00:37:55.058372    4292 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 00:37:55.058381    4292 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 00:37:55.060093    4292 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 00:37:55.060100    4292 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 00:37:55.081494    4292 out.go:204]   - Booting up control plane ...
	I0806 00:37:55.081559    4292 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 00:37:55.081566    4292 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 00:37:55.081622    4292 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 00:37:55.081627    4292 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 00:37:55.081688    4292 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 00:37:55.081706    4292 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 00:37:55.081835    4292 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 00:37:55.081836    4292 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 00:37:55.081921    4292 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 00:37:55.081928    4292 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 00:37:55.081962    4292 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 00:37:55.081972    4292 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0806 00:37:55.190382    4292 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0806 00:37:55.190382    4292 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0806 00:37:55.190467    4292 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0806 00:37:55.190474    4292 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0806 00:37:55.692270    4292 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.007026ms
	I0806 00:37:55.692288    4292 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 502.007026ms
	I0806 00:37:55.692374    4292 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0806 00:37:55.692383    4292 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0806 00:37:59.693684    4292 kubeadm.go:310] [api-check] The API server is healthy after 4.003026548s
	I0806 00:37:59.693693    4292 command_runner.go:130] > [api-check] The API server is healthy after 4.003026548s
	I0806 00:37:59.705633    4292 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0806 00:37:59.705646    4292 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0806 00:37:59.720099    4292 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0806 00:37:59.720109    4292 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0806 00:37:59.738249    4292 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0806 00:37:59.738275    4292 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0806 00:37:59.738423    4292 kubeadm.go:310] [mark-control-plane] Marking the node multinode-100000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0806 00:37:59.738434    4292 command_runner.go:130] > [mark-control-plane] Marking the node multinode-100000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0806 00:37:59.745383    4292 kubeadm.go:310] [bootstrap-token] Using token: vbomjh.qsf72loo4zgv06fc
	I0806 00:37:59.745397    4292 command_runner.go:130] > [bootstrap-token] Using token: vbomjh.qsf72loo4zgv06fc
	I0806 00:37:59.783358    4292 out.go:204]   - Configuring RBAC rules ...
	I0806 00:37:59.783539    4292 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0806 00:37:59.783560    4292 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0806 00:37:59.785907    4292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0806 00:37:59.785948    4292 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0806 00:37:59.826999    4292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0806 00:37:59.827006    4292 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0806 00:37:59.829623    4292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0806 00:37:59.829627    4292 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0806 00:37:59.832217    4292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0806 00:37:59.832231    4292 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0806 00:37:59.834614    4292 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0806 00:37:59.834628    4292 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0806 00:38:00.099434    4292 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0806 00:38:00.099444    4292 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0806 00:38:00.510267    4292 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0806 00:38:00.510286    4292 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0806 00:38:01.098516    4292 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0806 00:38:01.098535    4292 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0806 00:38:01.099426    4292 kubeadm.go:310] 
	I0806 00:38:01.099476    4292 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0806 00:38:01.099482    4292 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0806 00:38:01.099485    4292 kubeadm.go:310] 
	I0806 00:38:01.099571    4292 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0806 00:38:01.099579    4292 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0806 00:38:01.099583    4292 kubeadm.go:310] 
	I0806 00:38:01.099621    4292 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0806 00:38:01.099627    4292 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0806 00:38:01.099685    4292 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0806 00:38:01.099692    4292 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0806 00:38:01.099737    4292 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0806 00:38:01.099742    4292 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0806 00:38:01.099758    4292 kubeadm.go:310] 
	I0806 00:38:01.099805    4292 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0806 00:38:01.099811    4292 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0806 00:38:01.099816    4292 kubeadm.go:310] 
	I0806 00:38:01.099868    4292 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0806 00:38:01.099874    4292 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0806 00:38:01.099878    4292 kubeadm.go:310] 
	I0806 00:38:01.099924    4292 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0806 00:38:01.099932    4292 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0806 00:38:01.099998    4292 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0806 00:38:01.100012    4292 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0806 00:38:01.100083    4292 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0806 00:38:01.100088    4292 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0806 00:38:01.100092    4292 kubeadm.go:310] 
	I0806 00:38:01.100168    4292 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0806 00:38:01.100177    4292 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0806 00:38:01.100245    4292 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0806 00:38:01.100249    4292 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0806 00:38:01.100256    4292 kubeadm.go:310] 
	I0806 00:38:01.100330    4292 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token vbomjh.qsf72loo4zgv06fc \
	I0806 00:38:01.100335    4292 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vbomjh.qsf72loo4zgv06fc \
	I0806 00:38:01.100422    4292 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e \
	I0806 00:38:01.100428    4292 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e \
	I0806 00:38:01.100450    4292 command_runner.go:130] > 	--control-plane 
	I0806 00:38:01.100454    4292 kubeadm.go:310] 	--control-plane 
	I0806 00:38:01.100465    4292 kubeadm.go:310] 
	I0806 00:38:01.100533    4292 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0806 00:38:01.100538    4292 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0806 00:38:01.100545    4292 kubeadm.go:310] 
	I0806 00:38:01.100605    4292 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token vbomjh.qsf72loo4zgv06fc \
	I0806 00:38:01.100610    4292 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vbomjh.qsf72loo4zgv06fc \
	I0806 00:38:01.100694    4292 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e 
	I0806 00:38:01.100703    4292 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e 
	I0806 00:38:01.101330    4292 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 00:38:01.101334    4292 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 00:38:01.101354    4292 cni.go:84] Creating CNI manager for ""
	I0806 00:38:01.101361    4292 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0806 00:38:01.123627    4292 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0806 00:38:01.196528    4292 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0806 00:38:01.201237    4292 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0806 00:38:01.201250    4292 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0806 00:38:01.201255    4292 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0806 00:38:01.201260    4292 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0806 00:38:01.201265    4292 command_runner.go:130] > Access: 2024-08-06 07:35:44.089192446 +0000
	I0806 00:38:01.201275    4292 command_runner.go:130] > Modify: 2024-07-29 16:10:03.000000000 +0000
	I0806 00:38:01.201282    4292 command_runner.go:130] > Change: 2024-08-06 07:35:42.019366338 +0000
	I0806 00:38:01.201285    4292 command_runner.go:130] >  Birth: -
	I0806 00:38:01.201457    4292 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0806 00:38:01.201465    4292 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0806 00:38:01.217771    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0806 00:38:01.451925    4292 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0806 00:38:01.451939    4292 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0806 00:38:01.451946    4292 command_runner.go:130] > serviceaccount/kindnet created
	I0806 00:38:01.451949    4292 command_runner.go:130] > daemonset.apps/kindnet created
	I0806 00:38:01.451970    4292 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 00:38:01.452056    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:01.452057    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-100000 minikube.k8s.io/updated_at=2024_08_06T00_38_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8 minikube.k8s.io/name=multinode-100000 minikube.k8s.io/primary=true
	I0806 00:38:01.610233    4292 command_runner.go:130] > node/multinode-100000 labeled
	I0806 00:38:01.611382    4292 command_runner.go:130] > -16
	I0806 00:38:01.611408    4292 ops.go:34] apiserver oom_adj: -16
	I0806 00:38:01.611436    4292 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0806 00:38:01.611535    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:01.673352    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:02.112700    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:02.170574    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:02.612824    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:02.681015    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:03.112860    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:03.173114    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:03.612060    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:03.674241    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:04.112239    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:04.174075    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:04.613016    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:04.675523    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:05.112239    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:05.171613    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:05.611863    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:05.672963    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:06.112009    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:06.167728    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:06.613273    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:06.670554    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:07.113057    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:07.167700    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:07.613035    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:07.675035    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:08.113568    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:08.177386    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:08.611850    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:08.669063    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:09.113472    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:09.173560    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:09.613780    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:09.676070    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:10.112109    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:10.172674    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:10.613930    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:10.669788    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:11.112032    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:11.178288    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:11.612564    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:11.681621    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:12.112219    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:12.169314    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:12.612581    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:12.670247    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:13.113181    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:13.172574    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:13.613362    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:13.672811    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:14.112553    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:14.177904    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:14.612414    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:14.708737    4292 command_runner.go:130] > NAME      SECRETS   AGE
	I0806 00:38:14.708751    4292 command_runner.go:130] > default   0         0s
	I0806 00:38:14.710041    4292 kubeadm.go:1113] duration metric: took 13.257790627s to wait for elevateKubeSystemPrivileges
	I0806 00:38:14.710058    4292 kubeadm.go:394] duration metric: took 22.29094538s to StartCluster
	I0806 00:38:14.710072    4292 settings.go:142] acquiring lock: {Name:mk7aec99dc6d69d6a2c18b35ff8bde3cddf78620 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:38:14.710182    4292 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:38:14.710733    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/kubeconfig: {Name:mka547673b59bc4eb06e1f2c8130de31708dba29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:38:14.710987    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0806 00:38:14.710992    4292 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 00:38:14.711032    4292 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 00:38:14.711084    4292 addons.go:69] Setting storage-provisioner=true in profile "multinode-100000"
	I0806 00:38:14.711092    4292 addons.go:69] Setting default-storageclass=true in profile "multinode-100000"
	I0806 00:38:14.711119    4292 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-100000"
	I0806 00:38:14.711121    4292 addons.go:234] Setting addon storage-provisioner=true in "multinode-100000"
	I0806 00:38:14.711168    4292 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:38:14.711168    4292 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:38:14.711516    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.711537    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.711593    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.711618    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.720676    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52433
	I0806 00:38:14.721047    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52435
	I0806 00:38:14.721245    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.721337    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.721602    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.721612    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.721697    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.721714    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.721841    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.721914    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.721953    4292 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:38:14.722073    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:14.722146    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:38:14.722387    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.722420    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.724119    4292 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:38:14.724644    4292 kapi.go:59] client config for multinode-100000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key", CAFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x126711a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 00:38:14.725326    4292 cert_rotation.go:137] Starting client certificate rotation controller
	I0806 00:38:14.725514    4292 addons.go:234] Setting addon default-storageclass=true in "multinode-100000"
	I0806 00:38:14.725534    4292 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:38:14.725758    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.725781    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.731505    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52437
	I0806 00:38:14.731883    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.732214    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.732225    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.732427    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.732542    4292 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:38:14.732646    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:14.732716    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:38:14.733688    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:38:14.734469    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52439
	I0806 00:38:14.749366    4292 out.go:177] * Verifying Kubernetes components...
	I0806 00:38:14.750086    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.771676    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.771692    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.771908    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.772346    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.772371    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.781133    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52441
	I0806 00:38:14.781487    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.781841    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.781857    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.782071    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.782186    4292 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:38:14.782264    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:14.782343    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:38:14.783274    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:38:14.783391    4292 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 00:38:14.783400    4292 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 00:38:14.783408    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:38:14.783487    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:38:14.783566    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:38:14.783647    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:38:14.783724    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:38:14.807507    4292 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:38:14.814402    4292 command_runner.go:130] > apiVersion: v1
	I0806 00:38:14.814414    4292 command_runner.go:130] > data:
	I0806 00:38:14.814417    4292 command_runner.go:130] >   Corefile: |
	I0806 00:38:14.814421    4292 command_runner.go:130] >     .:53 {
	I0806 00:38:14.814427    4292 command_runner.go:130] >         errors
	I0806 00:38:14.814434    4292 command_runner.go:130] >         health {
	I0806 00:38:14.814462    4292 command_runner.go:130] >            lameduck 5s
	I0806 00:38:14.814467    4292 command_runner.go:130] >         }
	I0806 00:38:14.814470    4292 command_runner.go:130] >         ready
	I0806 00:38:14.814475    4292 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0806 00:38:14.814479    4292 command_runner.go:130] >            pods insecure
	I0806 00:38:14.814483    4292 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0806 00:38:14.814491    4292 command_runner.go:130] >            ttl 30
	I0806 00:38:14.814494    4292 command_runner.go:130] >         }
	I0806 00:38:14.814498    4292 command_runner.go:130] >         prometheus :9153
	I0806 00:38:14.814502    4292 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0806 00:38:14.814511    4292 command_runner.go:130] >            max_concurrent 1000
	I0806 00:38:14.814515    4292 command_runner.go:130] >         }
	I0806 00:38:14.814519    4292 command_runner.go:130] >         cache 30
	I0806 00:38:14.814522    4292 command_runner.go:130] >         loop
	I0806 00:38:14.814527    4292 command_runner.go:130] >         reload
	I0806 00:38:14.814530    4292 command_runner.go:130] >         loadbalance
	I0806 00:38:14.814541    4292 command_runner.go:130] >     }
	I0806 00:38:14.814545    4292 command_runner.go:130] > kind: ConfigMap
	I0806 00:38:14.814548    4292 command_runner.go:130] > metadata:
	I0806 00:38:14.814555    4292 command_runner.go:130] >   creationTimestamp: "2024-08-06T07:38:00Z"
	I0806 00:38:14.814559    4292 command_runner.go:130] >   name: coredns
	I0806 00:38:14.814563    4292 command_runner.go:130] >   namespace: kube-system
	I0806 00:38:14.814566    4292 command_runner.go:130] >   resourceVersion: "257"
	I0806 00:38:14.814570    4292 command_runner.go:130] >   uid: d8fd854e-ee58-4cd2-8723-2418b89b5dc3
	I0806 00:38:14.814679    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0806 00:38:14.866135    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:38:14.866436    4292 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 00:38:14.866454    4292 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 00:38:14.866500    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:38:14.866990    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:38:14.867164    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:38:14.867290    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:38:14.867406    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:38:14.872742    4292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 00:38:15.241341    4292 command_runner.go:130] > configmap/coredns replaced
	I0806 00:38:15.242685    4292 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I0806 00:38:15.242758    4292 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:38:15.242961    4292 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:38:15.243148    4292 kapi.go:59] client config for multinode-100000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key", CAFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x126711a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 00:38:15.243392    4292 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0806 00:38:15.243400    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.243407    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.243411    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.256678    4292 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0806 00:38:15.256695    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.256702    4292 round_trippers.go:580]     Audit-Id: c7c6b1c0-d638-405d-9826-1613f9442124
	I0806 00:38:15.256715    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.256719    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.256721    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.256724    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.256731    4292 round_trippers.go:580]     Content-Length: 291
	I0806 00:38:15.256734    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.256762    4292 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a7f2b260-b404-47f8-94a7-9444b4d2e65d","resourceVersion":"385","creationTimestamp":"2024-08-06T07:38:00Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0806 00:38:15.257109    4292 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a7f2b260-b404-47f8-94a7-9444b4d2e65d","resourceVersion":"385","creationTimestamp":"2024-08-06T07:38:00Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0806 00:38:15.257149    4292 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0806 00:38:15.257157    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.257163    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.257166    4292 round_trippers.go:473]     Content-Type: application/json
	I0806 00:38:15.257169    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.263818    4292 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0806 00:38:15.263831    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.263837    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.263840    4292 round_trippers.go:580]     Content-Length: 291
	I0806 00:38:15.263843    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.263846    4292 round_trippers.go:580]     Audit-Id: fc5baf31-13f0-4c94-a234-c9583698bc4a
	I0806 00:38:15.263849    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.263853    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.263856    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.263869    4292 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a7f2b260-b404-47f8-94a7-9444b4d2e65d","resourceVersion":"387","creationTimestamp":"2024-08-06T07:38:00Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0806 00:38:15.288440    4292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 00:38:15.316986    4292 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0806 00:38:15.318339    4292 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:38:15.318523    4292 kapi.go:59] client config for multinode-100000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key", CAFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x126711a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 00:38:15.318703    4292 node_ready.go:35] waiting up to 6m0s for node "multinode-100000" to be "Ready" ...
	I0806 00:38:15.318752    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:15.318757    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.318762    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.318766    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.318890    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.318897    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.319084    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.319089    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.319096    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.319104    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.319113    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.319239    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.319249    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.319298    4292 round_trippers.go:463] GET https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses
	I0806 00:38:15.319296    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.319304    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.319313    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.319316    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.328466    4292 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0806 00:38:15.328478    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.328484    4292 round_trippers.go:580]     Content-Length: 1273
	I0806 00:38:15.328487    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.328490    4292 round_trippers.go:580]     Audit-Id: 55117bdb-b1b1-4b1d-a091-1eb3d07a9569
	I0806 00:38:15.328493    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.328496    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.328498    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.328501    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.328521    4292 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"396"},"items":[{"metadata":{"name":"standard","uid":"db2316a9-24ea-47df-bf39-03322fc9a8eb","resourceVersion":"396","creationTimestamp":"2024-08-06T07:38:15Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-06T07:38:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0806 00:38:15.328567    4292 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0806 00:38:15.328581    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.328586    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.328590    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.328593    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.328596    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.328599    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.328602    4292 round_trippers.go:580]     Audit-Id: 7ce70ed0-47c9-432d-8e5b-ac52e38e59a7
	I0806 00:38:15.328766    4292 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"db2316a9-24ea-47df-bf39-03322fc9a8eb","resourceVersion":"396","creationTimestamp":"2024-08-06T07:38:15Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-06T07:38:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0806 00:38:15.328802    4292 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0806 00:38:15.328808    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.328813    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.328818    4292 round_trippers.go:473]     Content-Type: application/json
	I0806 00:38:15.328820    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.330337    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:15.340216    4292 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0806 00:38:15.340231    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.340236    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.340243    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.340247    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.340251    4292 round_trippers.go:580]     Content-Length: 1220
	I0806 00:38:15.340254    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.340257    4292 round_trippers.go:580]     Audit-Id: 6dc8b90a-612f-4331-8c4e-911fcb5e8b97
	I0806 00:38:15.340261    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.340479    4292 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"db2316a9-24ea-47df-bf39-03322fc9a8eb","resourceVersion":"396","creationTimestamp":"2024-08-06T07:38:15Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-06T07:38:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0806 00:38:15.340564    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.340574    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.340728    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.340739    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.340746    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.606405    4292 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0806 00:38:15.610350    4292 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0806 00:38:15.615396    4292 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0806 00:38:15.619891    4292 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0806 00:38:15.627349    4292 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0806 00:38:15.635206    4292 command_runner.go:130] > pod/storage-provisioner created
	I0806 00:38:15.636675    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.636686    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.636830    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.636833    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.636843    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.636852    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.636857    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.636972    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.636980    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.636995    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.660876    4292 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0806 00:38:15.681735    4292 addons.go:510] duration metric: took 970.696783ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0806 00:38:15.744023    4292 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0806 00:38:15.744043    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.744049    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.744053    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.745471    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:15.745481    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.745486    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.745489    4292 round_trippers.go:580]     Audit-Id: 2e02dd3c-4368-4363-aef8-c54cb00d4e41
	I0806 00:38:15.745492    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.745495    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.745497    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.745500    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.745503    4292 round_trippers.go:580]     Content-Length: 291
	I0806 00:38:15.745519    4292 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a7f2b260-b404-47f8-94a7-9444b4d2e65d","resourceVersion":"399","creationTimestamp":"2024-08-06T07:38:00Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0806 00:38:15.745572    4292 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-100000" context rescaled to 1 replicas
	I0806 00:38:15.820125    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:15.820137    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.820143    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.820145    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.821478    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:15.821488    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.821495    4292 round_trippers.go:580]     Audit-Id: 2538e82b-a5b8-4cce-b67d-49b0a0cc6ccb
	I0806 00:38:15.821499    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.821504    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.821509    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.821513    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.821517    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.821699    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:16.318995    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:16.319022    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:16.319044    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:16.319050    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:16.321451    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:16.321466    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:16.321473    4292 round_trippers.go:580]     Audit-Id: 6d358883-b606-4bf9-b02f-6cb3dcc86ebb
	I0806 00:38:16.321478    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:16.321482    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:16.321507    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:16.321515    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:16.321519    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:16 GMT
	I0806 00:38:16.321636    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:16.819864    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:16.819880    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:16.819887    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:16.819892    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:16.822003    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:16.822013    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:16.822019    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:16.822032    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:16 GMT
	I0806 00:38:16.822039    4292 round_trippers.go:580]     Audit-Id: 688c294c-2ec1-4257-9ae2-31048566e1a5
	I0806 00:38:16.822042    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:16.822045    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:16.822048    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:16.822127    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:17.319875    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:17.319887    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:17.319893    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:17.319898    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:17.324202    4292 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 00:38:17.324219    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:17.324228    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:17.324233    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:17.324237    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:17.324247    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:17.324251    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:17 GMT
	I0806 00:38:17.324253    4292 round_trippers.go:580]     Audit-Id: 3cbcad32-1d66-4480-8eea-e0ba3baeb718
	I0806 00:38:17.324408    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:17.324668    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:17.818929    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:17.818941    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:17.818948    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:17.818952    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:17.820372    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:17.820383    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:17.820390    4292 round_trippers.go:580]     Audit-Id: 1b64d2ad-91d1-49c6-8964-cd044f7ab24f
	I0806 00:38:17.820395    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:17.820400    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:17.820404    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:17.820407    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:17.820409    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:17 GMT
	I0806 00:38:17.820562    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:18.318915    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:18.318928    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:18.318934    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:18.318937    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:18.320383    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:18.320392    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:18.320396    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:18.320400    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:18 GMT
	I0806 00:38:18.320403    4292 round_trippers.go:580]     Audit-Id: b404a6ee-15b9-4e15-b8f8-4cd9324a513d
	I0806 00:38:18.320405    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:18.320408    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:18.320411    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:18.320536    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:18.819634    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:18.819647    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:18.819654    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:18.819657    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:18.821628    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:18.821635    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:18.821639    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:18 GMT
	I0806 00:38:18.821643    4292 round_trippers.go:580]     Audit-Id: 12545d9e-2520-4675-8957-dd291bc1d252
	I0806 00:38:18.821646    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:18.821649    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:18.821651    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:18.821654    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:18.821749    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:19.319242    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:19.319258    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:19.319264    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:19.319267    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:19.320611    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:19.320621    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:19.320627    4292 round_trippers.go:580]     Audit-Id: a9b124b2-ff49-4d7d-961a-c4a1b6b3e4ab
	I0806 00:38:19.320630    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:19.320632    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:19.320635    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:19.320639    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:19.320642    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:19 GMT
	I0806 00:38:19.320781    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:19.820342    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:19.820371    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:19.820428    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:19.820437    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:19.823221    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:19.823242    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:19.823252    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:19.823258    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:19.823266    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:19.823272    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:19 GMT
	I0806 00:38:19.823291    4292 round_trippers.go:580]     Audit-Id: 9330a785-b406-42d7-a74c-e80b34311e1a
	I0806 00:38:19.823302    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:19.823409    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:19.823671    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:20.319027    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:20.319043    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:20.319051    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:20.319056    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:20.320812    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:20.320821    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:20.320827    4292 round_trippers.go:580]     Audit-Id: 1d9840bb-ba8b-45f8-852f-8ef7f645c8bd
	I0806 00:38:20.320830    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:20.320832    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:20.320835    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:20.320838    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:20.320841    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:20 GMT
	I0806 00:38:20.321034    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:20.819543    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:20.819566    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:20.819578    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:20.819585    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:20.822277    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:20.822293    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:20.822300    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:20.822303    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:20.822307    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:20 GMT
	I0806 00:38:20.822310    4292 round_trippers.go:580]     Audit-Id: 6a96712c-fdd2-4036-95c0-27109366b2b5
	I0806 00:38:20.822313    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:20.822332    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:20.822436    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:21.319938    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:21.320061    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:21.320076    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:21.320084    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:21.322332    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:21.322343    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:21.322350    4292 round_trippers.go:580]     Audit-Id: b6796df6-8c9c-475a-b9c2-e73edb1c0720
	I0806 00:38:21.322355    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:21.322359    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:21.322362    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:21.322366    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:21.322370    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:21 GMT
	I0806 00:38:21.322503    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:21.819349    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:21.819372    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:21.819383    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:21.819388    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:21.821890    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:21.821905    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:21.821912    4292 round_trippers.go:580]     Audit-Id: 89b2a861-f5a0-43e4-9d3f-01f7230eecc8
	I0806 00:38:21.821916    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:21.821920    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:21.821923    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:21.821927    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:21.821931    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:21 GMT
	I0806 00:38:21.822004    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:22.320544    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:22.320565    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:22.320576    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:22.320581    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:22.322858    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:22.322872    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:22.322879    4292 round_trippers.go:580]     Audit-Id: 70ae59be-bf9a-4c7a-9fb8-93ea504768fb
	I0806 00:38:22.322885    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:22.322888    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:22.322891    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:22.322895    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:22.322897    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:22 GMT
	I0806 00:38:22.323158    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:22.323412    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:22.819095    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:22.819114    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:22.819126    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:22.819132    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:22.821284    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:22.821297    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:22.821307    4292 round_trippers.go:580]     Audit-Id: 1c5d80ab-21c3-4733-bd98-f4c681e0fe0e
	I0806 00:38:22.821313    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:22.821318    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:22.821321    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:22.821324    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:22.821334    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:22 GMT
	I0806 00:38:22.821552    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:23.319478    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:23.319500    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:23.319518    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:23.319524    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:23.322104    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:23.322124    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:23.322132    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:23.322137    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:23.322143    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:23.322146    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:23.322156    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:23 GMT
	I0806 00:38:23.322161    4292 round_trippers.go:580]     Audit-Id: 5276d3f7-64a0-4983-b60c-4943cbdfd74f
	I0806 00:38:23.322305    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:23.819102    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:23.819121    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:23.819130    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:23.819135    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:23.821174    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:23.821208    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:23.821216    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:23.821222    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:23.821227    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:23.821230    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:23.821241    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:23 GMT
	I0806 00:38:23.821254    4292 round_trippers.go:580]     Audit-Id: 9a86a309-2e1e-4b43-9975-baf4a0c93f44
	I0806 00:38:23.821483    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:24.320265    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:24.320287    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:24.320299    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:24.320305    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:24.323064    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:24.323097    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:24.323123    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:24.323140    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:24.323149    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:24.323178    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:24.323185    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:24 GMT
	I0806 00:38:24.323196    4292 round_trippers.go:580]     Audit-Id: b0ef4ff1-b4d6-4fd5-870c-46b9f544b517
	I0806 00:38:24.323426    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:24.323675    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:24.819060    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:24.819080    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:24.819097    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:24.819136    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:24.821377    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:24.821390    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:24.821397    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:24 GMT
	I0806 00:38:24.821402    4292 round_trippers.go:580]     Audit-Id: b050183e-0245-4d40-9972-e2dd2be24181
	I0806 00:38:24.821405    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:24.821409    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:24.821413    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:24.821418    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:24.821619    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:25.319086    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:25.319102    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:25.319110    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:25.319114    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:25.321127    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:25.321149    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:25.321154    4292 round_trippers.go:580]     Audit-Id: b27c2996-2cfb-4ec2-83b6-49df62cf6805
	I0806 00:38:25.321177    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:25.321180    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:25.321184    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:25.321186    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:25.321194    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:25 GMT
	I0806 00:38:25.321259    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:25.820656    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:25.820678    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:25.820689    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:25.820695    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:25.823182    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:25.823194    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:25.823205    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:25.823210    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:25.823213    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:25.823216    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:25.823219    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:25 GMT
	I0806 00:38:25.823222    4292 round_trippers.go:580]     Audit-Id: e11f3fd5-b1c3-44c0-931c-e7172ae35765
	I0806 00:38:25.823311    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:26.320693    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:26.320710    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:26.320717    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:26.320721    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:26.322330    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:26.322339    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:26.322344    4292 round_trippers.go:580]     Audit-Id: 0c372b78-f3b7-43f2-a7aa-6ec405f17ce3
	I0806 00:38:26.322347    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:26.322350    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:26.322353    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:26.322363    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:26.322366    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:26 GMT
	I0806 00:38:26.322578    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:26.820921    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:26.820948    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:26.820966    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:26.820972    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:26.823698    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:26.823713    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:26.823723    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:26.823730    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:26.823739    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:26 GMT
	I0806 00:38:26.823757    4292 round_trippers.go:580]     Audit-Id: e8e852a8-07b7-455b-8f5b-ff9801610b22
	I0806 00:38:26.823766    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:26.823770    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:26.824211    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:26.824465    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:27.321232    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:27.321253    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:27.321265    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:27.321270    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:27.324530    4292 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:38:27.324543    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:27.324550    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:27.324554    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:27 GMT
	I0806 00:38:27.324566    4292 round_trippers.go:580]     Audit-Id: 4a0b2d15-d15f-46de-8b4a-13a9d4121efd
	I0806 00:38:27.324572    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:27.324578    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:27.324583    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:27.324732    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:27.820148    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:27.820170    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:27.820181    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:27.820186    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:27.822835    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:27.822859    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:27.823023    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:27.823030    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:27.823033    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:27.823038    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:27.823046    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:27 GMT
	I0806 00:38:27.823049    4292 round_trippers.go:580]     Audit-Id: 77dd4240-18e0-49c7-8881-ae5df446f885
	I0806 00:38:27.823127    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:28.319391    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:28.319412    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:28.319423    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:28.319431    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:28.321889    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:28.321906    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:28.321916    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:28.321923    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:28.321927    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:28.321930    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:28 GMT
	I0806 00:38:28.321933    4292 round_trippers.go:580]     Audit-Id: d4ff4fc8-d53b-4307-82a0-9a61164b0b18
	I0806 00:38:28.321937    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:28.322088    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:28.819334    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:28.819362    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:28.819374    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:28.819385    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:28.821814    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:28.821826    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:28.821833    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:28.821838    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:28.821843    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:28.821847    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:28.821851    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:28 GMT
	I0806 00:38:28.821855    4292 round_trippers.go:580]     Audit-Id: 9a79b284-c2c3-4adb-9d74-73805465144b
	I0806 00:38:28.821988    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:29.320103    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:29.320120    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:29.320128    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:29.320134    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:29.321966    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:29.321980    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:29.321987    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:29.322000    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:29.322005    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:29.322008    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:29 GMT
	I0806 00:38:29.322020    4292 round_trippers.go:580]     Audit-Id: 749bcf9b-24c9-4fac-99d8-ad9e961b1897
	I0806 00:38:29.322024    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:29.322094    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:29.322341    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:29.819722    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:29.819743    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:29.819752    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:29.819760    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:29.822636    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:29.822668    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:29.822700    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:29.822711    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:29.822721    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:29.822735    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:29 GMT
	I0806 00:38:29.822748    4292 round_trippers.go:580]     Audit-Id: 5408f9b5-fba3-4495-a0b7-9791cf82019c
	I0806 00:38:29.822773    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:29.822903    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:30.320349    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:30.320370    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.320380    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.320385    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.322518    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:30.322531    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.322538    4292 round_trippers.go:580]     Audit-Id: 1df1df85-a25c-4470-876a-7b00620c8f9b
	I0806 00:38:30.322543    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.322546    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.322550    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.322553    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.322558    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.322794    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:30.820065    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:30.820087    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.820099    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.820111    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.822652    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:30.822673    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.822683    4292 round_trippers.go:580]     Audit-Id: 0926ae78-d98d-44a5-8489-5522ccd95503
	I0806 00:38:30.822689    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.822695    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.822700    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.822706    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.822713    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.823032    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:30.823315    4292 node_ready.go:49] node "multinode-100000" has status "Ready":"True"
	I0806 00:38:30.823329    4292 node_ready.go:38] duration metric: took 15.504306549s for node "multinode-100000" to be "Ready" ...
	I0806 00:38:30.823341    4292 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 00:38:30.823387    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:30.823395    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.823403    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.823407    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.825747    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:30.825756    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.825761    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.825764    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.825768    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.825770    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.825773    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.825775    4292 round_trippers.go:580]     Audit-Id: f1883856-a563-4d68-a4ed-7bface4b980a
	I0806 00:38:30.827206    4292 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"431","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56289 chars]
	I0806 00:38:30.829456    4292 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:30.829498    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:38:30.829503    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.829508    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.829512    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.830675    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:30.830684    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.830691    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.830696    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.830704    4292 round_trippers.go:580]     Audit-Id: f42eab96-6adf-4fcb-9345-e180ca00b73d
	I0806 00:38:30.830715    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.830718    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.830720    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.830856    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"431","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0806 00:38:30.831092    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:30.831099    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.831105    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.831107    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.832184    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:30.832191    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.832197    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.832203    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.832207    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.832212    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.832218    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.832226    4292 round_trippers.go:580]     Audit-Id: d34ccfc2-089c-4010-b991-cc425a2b2446
	I0806 00:38:30.832371    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.329830    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:38:31.329844    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.329850    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.329854    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.331738    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:31.331767    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.331789    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.331808    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.331813    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.331817    4292 round_trippers.go:580]     Audit-Id: 32294b1b-fd5c-43f7-9851-1c5e5d04c3d9
	I0806 00:38:31.331820    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.331823    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.331921    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"431","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0806 00:38:31.332207    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.332215    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.332221    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.332225    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.333311    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:31.333324    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.333331    4292 round_trippers.go:580]     Audit-Id: a8b9458e-7f48-4e61-9daf-b2c4a52b1285
	I0806 00:38:31.333336    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.333342    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.333347    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.333351    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.333369    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.333493    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.830019    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:38:31.830040    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.830057    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.830063    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.832040    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:31.832055    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.832062    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.832068    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.832072    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.832076    4292 round_trippers.go:580]     Audit-Id: eae85e40-d774-4e35-8513-1a20542ce5f5
	I0806 00:38:31.832079    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.832082    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.832316    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"446","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6576 chars]
	I0806 00:38:31.832691    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.832701    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.832710    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.832715    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.833679    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.833688    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.833694    4292 round_trippers.go:580]     Audit-Id: ecd49a1b-eb24-4191-89bb-5cb071fd543a
	I0806 00:38:31.833699    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.833702    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.833711    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.833714    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.833717    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.833906    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.834082    4292 pod_ready.go:92] pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:31.834093    4292 pod_ready.go:81] duration metric: took 1.004604302s for pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.834101    4292 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.834131    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-100000
	I0806 00:38:31.834136    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.834141    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.834145    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.835126    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.835134    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.835139    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.835144    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.835147    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.835152    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.835155    4292 round_trippers.go:580]     Audit-Id: 8f3355e7-ed89-4a5c-9ef4-3f319a0b7eef
	I0806 00:38:31.835157    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.835289    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-100000","namespace":"kube-system","uid":"227ab7d9-399e-4151-bee7-1520182e38fe","resourceVersion":"333","creationTimestamp":"2024-08-06T07:37:59Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"4d956ffcd8bdef6a75a3174d9c9d792c","kubernetes.io/config.mirror":"4d956ffcd8bdef6a75a3174d9c9d792c","kubernetes.io/config.seen":"2024-08-06T07:37:55.730523562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:37:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0806 00:38:31.835498    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.835505    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.835510    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.835514    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.836524    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:31.836533    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.836539    4292 round_trippers.go:580]     Audit-Id: a9fdb4f7-31e3-48e4-b5f3-023b2c5e4bab
	I0806 00:38:31.836547    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.836553    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.836556    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.836562    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.836568    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.836674    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.836837    4292 pod_ready.go:92] pod "etcd-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:31.836847    4292 pod_ready.go:81] duration metric: took 2.741532ms for pod "etcd-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.836854    4292 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.836883    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-100000
	I0806 00:38:31.836888    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.836894    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.836898    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.837821    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.837830    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.837836    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.837840    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.837844    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.837846    4292 round_trippers.go:580]     Audit-Id: 32a7a6c7-72cf-4b7f-8f80-7ebb5aaaf666
	I0806 00:38:31.837850    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.837853    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.838003    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-100000","namespace":"kube-system","uid":"ce1dee9b-5f30-49a9-9066-7faf5f65c4d3","resourceVersion":"331","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"7812fbdfd4f741d8b504bcb30d9268c5","kubernetes.io/config.mirror":"7812fbdfd4f741d8b504bcb30d9268c5","kubernetes.io/config.seen":"2024-08-06T07:38:00.425843150Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7684 chars]
	I0806 00:38:31.838230    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.838237    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.838243    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.838247    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.839014    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.839023    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.839030    4292 round_trippers.go:580]     Audit-Id: 7f28e0f4-8551-4462-aec2-766b8d2482cb
	I0806 00:38:31.839036    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.839040    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.839042    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.839045    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.839048    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.839181    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.839335    4292 pod_ready.go:92] pod "kube-apiserver-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:31.839345    4292 pod_ready.go:81] duration metric: took 2.482949ms for pod "kube-apiserver-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.839352    4292 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.839378    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-100000
	I0806 00:38:31.839383    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.839388    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.839392    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.840298    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.840305    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.840310    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.840313    4292 round_trippers.go:580]     Audit-Id: cf384588-551f-4b8a-b13b-1adda6dff10a
	I0806 00:38:31.840317    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.840320    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.840324    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.840328    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.840495    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-100000","namespace":"kube-system","uid":"cefe88fb-c337-47c3-b4f2-acdadde539f2","resourceVersion":"329","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0ae29164078dfb7d8ac7d5a935c4d875","kubernetes.io/config.mirror":"0ae29164078dfb7d8ac7d5a935c4d875","kubernetes.io/config.seen":"2024-08-06T07:38:00.425770816Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7259 chars]
	I0806 00:38:31.840707    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.840714    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.840719    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.840722    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.841465    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.841471    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.841476    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.841481    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.841487    4292 round_trippers.go:580]     Audit-Id: 9a301694-659b-414d-8736-740501267c17
	I0806 00:38:31.841491    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.841496    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.841500    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.841678    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.841830    4292 pod_ready.go:92] pod "kube-controller-manager-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:31.841836    4292 pod_ready.go:81] duration metric: took 2.479787ms for pod "kube-controller-manager-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.841842    4292 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-crsrr" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.841875    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crsrr
	I0806 00:38:31.841880    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.841885    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.841890    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.842875    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.842883    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.842888    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.842891    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.842895    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.842898    4292 round_trippers.go:580]     Audit-Id: 9e07db72-d867-47d3-adbc-514b547e8978
	I0806 00:38:31.842901    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.842904    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.843113    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-crsrr","generateName":"kube-proxy-","namespace":"kube-system","uid":"f72beca3-9601-4aad-b3ba-33f8de5db052","resourceVersion":"403","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aeb7868a-2175-4480-b58d-3eb9a593c884","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aeb7868a-2175-4480-b58d-3eb9a593c884\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
	I0806 00:38:32.021239    4292 request.go:629] Waited for 177.889914ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:32.021360    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:32.021372    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.021384    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.021390    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.024288    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:32.024309    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.024318    4292 round_trippers.go:580]     Audit-Id: d85fbd21-5256-48bd-b92b-10eb012d9c7a
	I0806 00:38:32.024322    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.024327    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.024331    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.024336    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.024339    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.024617    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:32.024865    4292 pod_ready.go:92] pod "kube-proxy-crsrr" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:32.024877    4292 pod_ready.go:81] duration metric: took 183.025974ms for pod "kube-proxy-crsrr" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:32.024887    4292 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:32.222202    4292 request.go:629] Waited for 197.196804ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-100000
	I0806 00:38:32.222252    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-100000
	I0806 00:38:32.222260    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.222284    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.222291    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.225758    4292 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:38:32.225776    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.225783    4292 round_trippers.go:580]     Audit-Id: 9c5c96d8-55ee-43bd-b8fe-af3b79432f55
	I0806 00:38:32.225788    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.225791    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.225797    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.225800    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.225803    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.225862    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-100000","namespace":"kube-system","uid":"773d7bde-86f3-4e9d-b4aa-67ca3b345180","resourceVersion":"332","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4d38f57d568be838072abd789adb44b9","kubernetes.io/config.mirror":"4d38f57d568be838072abd789adb44b9","kubernetes.io/config.seen":"2024-08-06T07:38:00.425836810Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0806 00:38:32.420759    4292 request.go:629] Waited for 194.652014ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:32.420927    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:32.420938    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.420949    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.420955    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.423442    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:32.423460    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.423471    4292 round_trippers.go:580]     Audit-Id: 04a6ba1a-a35c-4d8b-a087-80f9206646b4
	I0806 00:38:32.423478    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.423483    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.423488    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.423493    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.423499    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.423791    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:32.424052    4292 pod_ready.go:92] pod "kube-scheduler-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:32.424064    4292 pod_ready.go:81] duration metric: took 399.162309ms for pod "kube-scheduler-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:32.424073    4292 pod_ready.go:38] duration metric: took 1.600692444s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 00:38:32.424096    4292 api_server.go:52] waiting for apiserver process to appear ...
	I0806 00:38:32.424160    4292 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:38:32.436813    4292 command_runner.go:130] > 1953
	I0806 00:38:32.436840    4292 api_server.go:72] duration metric: took 17.725484476s to wait for apiserver process to appear ...
	I0806 00:38:32.436849    4292 api_server.go:88] waiting for apiserver healthz status ...
	I0806 00:38:32.436863    4292 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0806 00:38:32.440364    4292 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0806 00:38:32.440399    4292 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I0806 00:38:32.440404    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.440410    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.440421    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.440928    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:32.440937    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.440942    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.440946    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.440950    4292 round_trippers.go:580]     Content-Length: 263
	I0806 00:38:32.440953    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.440959    4292 round_trippers.go:580]     Audit-Id: c1a3bf62-d4bb-49fe-bb9c-6619b1793ab6
	I0806 00:38:32.440962    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.440965    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.440976    4292 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0806 00:38:32.441018    4292 api_server.go:141] control plane version: v1.30.3
	I0806 00:38:32.441028    4292 api_server.go:131] duration metric: took 4.174407ms to wait for apiserver health ...
	I0806 00:38:32.441033    4292 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 00:38:32.620918    4292 request.go:629] Waited for 179.84972ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:32.620960    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:32.620982    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.620988    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.620992    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.623183    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:32.623194    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.623199    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.623202    4292 round_trippers.go:580]     Audit-Id: 7febd61d-780d-47b6-884a-fdaf22170934
	I0806 00:38:32.623206    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.623211    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.623217    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.623221    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.623596    4292 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"446","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0806 00:38:32.624861    4292 system_pods.go:59] 8 kube-system pods found
	I0806 00:38:32.624876    4292 system_pods.go:61] "coredns-7db6d8ff4d-snf8h" [80bd44de-6f91-4e47-8832-a66b3c64808d] Running
	I0806 00:38:32.624880    4292 system_pods.go:61] "etcd-multinode-100000" [227ab7d9-399e-4151-bee7-1520182e38fe] Running
	I0806 00:38:32.624883    4292 system_pods.go:61] "kindnet-g2xk7" [84207ead-3403-4759-9bf2-ae0aa742699e] Running
	I0806 00:38:32.624886    4292 system_pods.go:61] "kube-apiserver-multinode-100000" [ce1dee9b-5f30-49a9-9066-7faf5f65c4d3] Running
	I0806 00:38:32.624890    4292 system_pods.go:61] "kube-controller-manager-multinode-100000" [cefe88fb-c337-47c3-b4f2-acdadde539f2] Running
	I0806 00:38:32.624895    4292 system_pods.go:61] "kube-proxy-crsrr" [f72beca3-9601-4aad-b3ba-33f8de5db052] Running
	I0806 00:38:32.624897    4292 system_pods.go:61] "kube-scheduler-multinode-100000" [773d7bde-86f3-4e9d-b4aa-67ca3b345180] Running
	I0806 00:38:32.624900    4292 system_pods.go:61] "storage-provisioner" [38b20fa5-6002-4e12-860f-1aa0047581b1] Running
	I0806 00:38:32.624904    4292 system_pods.go:74] duration metric: took 183.863815ms to wait for pod list to return data ...
	I0806 00:38:32.624911    4292 default_sa.go:34] waiting for default service account to be created ...
	I0806 00:38:32.821065    4292 request.go:629] Waited for 196.088199ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0806 00:38:32.821123    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0806 00:38:32.821132    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.821146    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.821153    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.824169    4292 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:38:32.824185    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.824192    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.824198    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.824203    4292 round_trippers.go:580]     Content-Length: 261
	I0806 00:38:32.824207    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.824210    4292 round_trippers.go:580]     Audit-Id: da9e49d4-6671-4b25-a056-32b71af0fb45
	I0806 00:38:32.824214    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.824217    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.824230    4292 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b920a0f4-26ad-4389-bfd3-1a9764da9619","resourceVersion":"336","creationTimestamp":"2024-08-06T07:38:14Z"}}]}
	I0806 00:38:32.824397    4292 default_sa.go:45] found service account: "default"
	I0806 00:38:32.824409    4292 default_sa.go:55] duration metric: took 199.488573ms for default service account to be created ...
	I0806 00:38:32.824419    4292 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 00:38:33.021550    4292 request.go:629] Waited for 197.072106ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:33.021720    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:33.021731    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:33.021741    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:33.021779    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:33.025126    4292 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:38:33.025143    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:33.025150    4292 round_trippers.go:580]     Audit-Id: e38b20d4-b38f-40c8-9e18-7f94f8f63289
	I0806 00:38:33.025155    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:33.025161    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:33.025166    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:33.025173    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:33.025177    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:33 GMT
	I0806 00:38:33.025737    4292 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"446","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0806 00:38:33.027034    4292 system_pods.go:86] 8 kube-system pods found
	I0806 00:38:33.027043    4292 system_pods.go:89] "coredns-7db6d8ff4d-snf8h" [80bd44de-6f91-4e47-8832-a66b3c64808d] Running
	I0806 00:38:33.027047    4292 system_pods.go:89] "etcd-multinode-100000" [227ab7d9-399e-4151-bee7-1520182e38fe] Running
	I0806 00:38:33.027050    4292 system_pods.go:89] "kindnet-g2xk7" [84207ead-3403-4759-9bf2-ae0aa742699e] Running
	I0806 00:38:33.027054    4292 system_pods.go:89] "kube-apiserver-multinode-100000" [ce1dee9b-5f30-49a9-9066-7faf5f65c4d3] Running
	I0806 00:38:33.027057    4292 system_pods.go:89] "kube-controller-manager-multinode-100000" [cefe88fb-c337-47c3-b4f2-acdadde539f2] Running
	I0806 00:38:33.027060    4292 system_pods.go:89] "kube-proxy-crsrr" [f72beca3-9601-4aad-b3ba-33f8de5db052] Running
	I0806 00:38:33.027066    4292 system_pods.go:89] "kube-scheduler-multinode-100000" [773d7bde-86f3-4e9d-b4aa-67ca3b345180] Running
	I0806 00:38:33.027069    4292 system_pods.go:89] "storage-provisioner" [38b20fa5-6002-4e12-860f-1aa0047581b1] Running
	I0806 00:38:33.027074    4292 system_pods.go:126] duration metric: took 202.645822ms to wait for k8s-apps to be running ...
	I0806 00:38:33.027081    4292 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 00:38:33.027147    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:38:33.038782    4292 system_svc.go:56] duration metric: took 11.697186ms WaitForService to wait for kubelet
	I0806 00:38:33.038797    4292 kubeadm.go:582] duration metric: took 18.327429775s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 00:38:33.038809    4292 node_conditions.go:102] verifying NodePressure condition ...
	I0806 00:38:33.220593    4292 request.go:629] Waited for 181.736174ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes
	I0806 00:38:33.220673    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0806 00:38:33.220683    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:33.220694    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:33.220703    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:33.223131    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:33.223147    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:33.223155    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:33 GMT
	I0806 00:38:33.223160    4292 round_trippers.go:580]     Audit-Id: c7a766de-973c-44db-9b8e-eb7ce291fdca
	I0806 00:38:33.223172    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:33.223177    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:33.223182    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:33.223222    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:33.223296    4292 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5011 chars]
	I0806 00:38:33.223576    4292 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 00:38:33.223592    4292 node_conditions.go:123] node cpu capacity is 2
	I0806 00:38:33.223604    4292 node_conditions.go:105] duration metric: took 184.787012ms to run NodePressure ...
	I0806 00:38:33.223614    4292 start.go:241] waiting for startup goroutines ...
	I0806 00:38:33.223627    4292 start.go:246] waiting for cluster config update ...
	I0806 00:38:33.223640    4292 start.go:255] writing updated cluster config ...
	I0806 00:38:33.244314    4292 out.go:177] 
	I0806 00:38:33.265217    4292 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:38:33.265273    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:38:33.287112    4292 out.go:177] * Starting "multinode-100000-m02" worker node in "multinode-100000" cluster
	I0806 00:38:33.345022    4292 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:38:33.345057    4292 cache.go:56] Caching tarball of preloaded images
	I0806 00:38:33.345244    4292 preload.go:172] Found /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0806 00:38:33.345262    4292 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 00:38:33.345351    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:38:33.346110    4292 start.go:360] acquireMachinesLock for multinode-100000-m02: {Name:mk23fe223591838ba69a1052c4474834b6e8897d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:38:33.346217    4292 start.go:364] duration metric: took 84.997µs to acquireMachinesLock for "multinode-100000-m02"
	I0806 00:38:33.346243    4292 start.go:93] Provisioning new machine with config: &{Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0806 00:38:33.346328    4292 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0806 00:38:33.367079    4292 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 00:38:33.367208    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:33.367236    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:33.376938    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52447
	I0806 00:38:33.377289    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:33.377644    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:33.377655    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:33.377842    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:33.377956    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:38:33.378049    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:33.378167    4292 start.go:159] libmachine.API.Create for "multinode-100000" (driver="hyperkit")
	I0806 00:38:33.378183    4292 client.go:168] LocalClient.Create starting
	I0806 00:38:33.378211    4292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem
	I0806 00:38:33.378259    4292 main.go:141] libmachine: Decoding PEM data...
	I0806 00:38:33.378273    4292 main.go:141] libmachine: Parsing certificate...
	I0806 00:38:33.378324    4292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem
	I0806 00:38:33.378363    4292 main.go:141] libmachine: Decoding PEM data...
	I0806 00:38:33.378372    4292 main.go:141] libmachine: Parsing certificate...
	I0806 00:38:33.378386    4292 main.go:141] libmachine: Running pre-create checks...
	I0806 00:38:33.378391    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .PreCreateCheck
	I0806 00:38:33.378464    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:33.378493    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetConfigRaw
	I0806 00:38:33.388269    4292 main.go:141] libmachine: Creating machine...
	I0806 00:38:33.388286    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .Create
	I0806 00:38:33.388457    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:33.388692    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | I0806 00:38:33.388444    4424 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 00:38:33.388794    4292 main.go:141] libmachine: (multinode-100000-m02) Downloading /Users/jenkins/minikube-integration/19370-944/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-944/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 00:38:33.588443    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | I0806 00:38:33.588344    4424 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa...
	I0806 00:38:33.635329    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | I0806 00:38:33.635211    4424 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/multinode-100000-m02.rawdisk...
	I0806 00:38:33.635352    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Writing magic tar header
	I0806 00:38:33.635368    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Writing SSH key tar header
	I0806 00:38:33.635773    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | I0806 00:38:33.635735    4424 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02 ...
	I0806 00:38:34.046661    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:34.046692    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/hyperkit.pid
	I0806 00:38:34.046795    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Using UUID 11e38ce6-805a-4a8b-9cb1-968ee3a613d4
	I0806 00:38:34.072180    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Generated MAC ee:b:b7:3a:75:5c
	I0806 00:38:34.072206    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000
	I0806 00:38:34.072252    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"11e38ce6-805a-4a8b-9cb1-968ee3a613d4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00011a450)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 00:38:34.072281    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"11e38ce6-805a-4a8b-9cb1-968ee3a613d4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00011a450)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 00:38:34.072340    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "11e38ce6-805a-4a8b-9cb1-968ee3a613d4", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/multinode-100000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"}
	I0806 00:38:34.072382    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 11e38ce6-805a-4a8b-9cb1-968ee3a613d4 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/multinode-100000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"
	I0806 00:38:34.072394    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0806 00:38:34.075231    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: Pid is 4427
	I0806 00:38:34.076417    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 0
	I0806 00:38:34.076438    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:34.076502    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:34.077372    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:34.077449    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:34.077468    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:34.077497    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:34.077509    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:34.077532    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:34.077550    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:34.077560    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:34.077570    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:34.077578    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:34.077587    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:34.077606    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:34.077631    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:34.077647    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:34.082964    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0806 00:38:34.092078    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0806 00:38:34.092798    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:38:34.092819    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:38:34.092831    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:38:34.092850    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:38:34.480770    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0806 00:38:34.480795    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0806 00:38:34.595499    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:38:34.595518    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:38:34.595530    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:38:34.595538    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:38:34.596350    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0806 00:38:34.596362    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0806 00:38:36.077787    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 1
	I0806 00:38:36.077803    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:36.077889    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:36.078719    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:36.078768    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:36.078779    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:36.078796    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:36.078805    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:36.078813    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:36.078820    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:36.078827    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:36.078837    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:36.078843    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:36.078849    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:36.078864    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:36.078881    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:36.078889    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:38.079369    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 2
	I0806 00:38:38.079385    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:38.079432    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:38.080212    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:38.080262    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:38.080273    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:38.080290    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:38.080296    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:38.080303    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:38.080310    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:38.080318    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:38.080325    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:38.080339    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:38.080355    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:38.080367    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:38.080376    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:38.080384    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:40.081876    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 3
	I0806 00:38:40.081892    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:40.081903    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:40.082774    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:40.082801    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:40.082812    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:40.082846    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:40.082873    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:40.082900    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:40.082918    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:40.082931    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:40.082940    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:40.082950    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:40.082966    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:40.082978    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:40.082987    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:40.082995    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:40.179725    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:40 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0806 00:38:40.179781    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:40 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0806 00:38:40.179795    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:40 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0806 00:38:40.203197    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:40 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0806 00:38:42.084360    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 4
	I0806 00:38:42.084374    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:42.084499    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:42.085281    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:42.085335    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:42.085343    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:42.085351    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:42.085358    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:42.085365    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:42.085371    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:42.085378    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:42.085386    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:42.085402    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:42.085414    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:42.085433    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:42.085441    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:42.085450    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:44.085602    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 5
	I0806 00:38:44.085628    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:44.085697    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:44.086496    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:44.086550    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 13 entries in /var/db/dhcpd_leases!
	I0806 00:38:44.086561    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b32483}
	I0806 00:38:44.086569    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found match: ee:b:b7:3a:75:5c
	I0806 00:38:44.086577    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | IP: 192.169.0.14
	I0806 00:38:44.086637    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetConfigRaw
	I0806 00:38:44.087855    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:44.087962    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:44.088059    4292 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0806 00:38:44.088068    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetState
	I0806 00:38:44.088141    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:44.088197    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:44.089006    4292 main.go:141] libmachine: Detecting operating system of created instance...
	I0806 00:38:44.089014    4292 main.go:141] libmachine: Waiting for SSH to be available...
	I0806 00:38:44.089023    4292 main.go:141] libmachine: Getting to WaitForSSH function...
	I0806 00:38:44.089029    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:44.089111    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:44.089190    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:44.089273    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:44.089354    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:44.089473    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:44.089664    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:44.089672    4292 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0806 00:38:45.153792    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:38:45.153806    4292 main.go:141] libmachine: Detecting the provisioner...
	I0806 00:38:45.153811    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.153942    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.154043    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.154169    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.154275    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.154425    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.154571    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.154581    4292 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0806 00:38:45.217564    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0806 00:38:45.217637    4292 main.go:141] libmachine: found compatible host: buildroot
	I0806 00:38:45.217648    4292 main.go:141] libmachine: Provisioning with buildroot...
	I0806 00:38:45.217668    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:38:45.217807    4292 buildroot.go:166] provisioning hostname "multinode-100000-m02"
	I0806 00:38:45.217817    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:38:45.217917    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.218023    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.218107    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.218194    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.218285    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.218407    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.218557    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.218566    4292 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-100000-m02 && echo "multinode-100000-m02" | sudo tee /etc/hostname
	I0806 00:38:45.293086    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-100000-m02
	
	I0806 00:38:45.293102    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.293254    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.293346    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.293437    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.293522    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.293658    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.293798    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.293811    4292 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-100000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-100000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-100000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 00:38:45.363408    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:38:45.363423    4292 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19370-944/.minikube CaCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19370-944/.minikube}
	I0806 00:38:45.363450    4292 buildroot.go:174] setting up certificates
	I0806 00:38:45.363457    4292 provision.go:84] configureAuth start
	I0806 00:38:45.363465    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:38:45.363605    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:38:45.363709    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.363796    4292 provision.go:143] copyHostCerts
	I0806 00:38:45.363827    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:38:45.363873    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem, removing ...
	I0806 00:38:45.363879    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:38:45.364378    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem (1078 bytes)
	I0806 00:38:45.364592    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:38:45.364623    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem, removing ...
	I0806 00:38:45.364628    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:38:45.364717    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem (1123 bytes)
	I0806 00:38:45.364875    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:38:45.364915    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem, removing ...
	I0806 00:38:45.364920    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:38:45.365034    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem (1679 bytes)
	I0806 00:38:45.365183    4292 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem org=jenkins.multinode-100000-m02 san=[127.0.0.1 192.169.0.14 localhost minikube multinode-100000-m02]
	I0806 00:38:45.437744    4292 provision.go:177] copyRemoteCerts
	I0806 00:38:45.437791    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 00:38:45.437806    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.437948    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.438040    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.438126    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.438207    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:38:45.477030    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0806 00:38:45.477105    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0806 00:38:45.496899    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0806 00:38:45.496965    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 00:38:45.516273    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0806 00:38:45.516341    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 00:38:45.536083    4292 provision.go:87] duration metric: took 172.615051ms to configureAuth
	I0806 00:38:45.536096    4292 buildroot.go:189] setting minikube options for container-runtime
	I0806 00:38:45.536221    4292 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:38:45.536234    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:45.536380    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.536470    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.536563    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.536650    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.536733    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.536861    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.536987    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.536994    4292 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0806 00:38:45.599518    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0806 00:38:45.599531    4292 buildroot.go:70] root file system type: tmpfs
	I0806 00:38:45.599626    4292 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0806 00:38:45.599637    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.599779    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.599891    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.599996    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.600086    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.600232    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.600374    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.600420    4292 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.13"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0806 00:38:45.674942    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.13
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0806 00:38:45.674960    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.675092    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.675165    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.675259    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.675344    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.675469    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.675602    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.675614    4292 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0806 00:38:47.211811    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0806 00:38:47.211826    4292 main.go:141] libmachine: Checking connection to Docker...
	I0806 00:38:47.211840    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetURL
	I0806 00:38:47.211985    4292 main.go:141] libmachine: Docker is up and running!
	I0806 00:38:47.211993    4292 main.go:141] libmachine: Reticulating splines...
	I0806 00:38:47.212004    4292 client.go:171] duration metric: took 13.833536596s to LocalClient.Create
	I0806 00:38:47.212016    4292 start.go:167] duration metric: took 13.833577856s to libmachine.API.Create "multinode-100000"
	I0806 00:38:47.212022    4292 start.go:293] postStartSetup for "multinode-100000-m02" (driver="hyperkit")
	I0806 00:38:47.212029    4292 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 00:38:47.212038    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.212165    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 00:38:47.212186    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:47.212274    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:47.212359    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.212450    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:47.212536    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:38:47.253675    4292 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 00:38:47.257359    4292 command_runner.go:130] > NAME=Buildroot
	I0806 00:38:47.257369    4292 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0806 00:38:47.257374    4292 command_runner.go:130] > ID=buildroot
	I0806 00:38:47.257380    4292 command_runner.go:130] > VERSION_ID=2023.02.9
	I0806 00:38:47.257386    4292 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0806 00:38:47.257598    4292 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 00:38:47.257609    4292 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/addons for local assets ...
	I0806 00:38:47.257715    4292 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/files for local assets ...
	I0806 00:38:47.257899    4292 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> 14372.pem in /etc/ssl/certs
	I0806 00:38:47.257909    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> /etc/ssl/certs/14372.pem
	I0806 00:38:47.258116    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 00:38:47.265892    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem --> /etc/ssl/certs/14372.pem (1708 bytes)
	I0806 00:38:47.297110    4292 start.go:296] duration metric: took 85.078237ms for postStartSetup
	I0806 00:38:47.297144    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetConfigRaw
	I0806 00:38:47.297792    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:38:47.297951    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:38:47.298302    4292 start.go:128] duration metric: took 13.951673071s to createHost
	I0806 00:38:47.298316    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:47.298413    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:47.298502    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.298600    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.298678    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:47.298783    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:47.298907    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:47.298914    4292 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 00:38:47.362043    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722929927.409318196
	
	I0806 00:38:47.362057    4292 fix.go:216] guest clock: 1722929927.409318196
	I0806 00:38:47.362062    4292 fix.go:229] Guest: 2024-08-06 00:38:47.409318196 -0700 PDT Remote: 2024-08-06 00:38:47.29831 -0700 PDT m=+194.654596821 (delta=111.008196ms)
	I0806 00:38:47.362071    4292 fix.go:200] guest clock delta is within tolerance: 111.008196ms
	I0806 00:38:47.362075    4292 start.go:83] releasing machines lock for "multinode-100000-m02", held for 14.015572789s
	I0806 00:38:47.362092    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.362220    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:38:47.382612    4292 out.go:177] * Found network options:
	I0806 00:38:47.403509    4292 out.go:177]   - NO_PROXY=192.169.0.13
	W0806 00:38:47.425687    4292 proxy.go:119] fail to check proxy env: Error ip not in block
	I0806 00:38:47.425738    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.426659    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.426958    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.427090    4292 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 00:38:47.427141    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	W0806 00:38:47.427187    4292 proxy.go:119] fail to check proxy env: Error ip not in block
	I0806 00:38:47.427313    4292 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0806 00:38:47.427341    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:47.427407    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:47.427565    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.427581    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:47.427794    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:47.427828    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.428004    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:38:47.428059    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:47.428184    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:38:47.463967    4292 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0806 00:38:47.464076    4292 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 00:38:47.464135    4292 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 00:38:47.515738    4292 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0806 00:38:47.516046    4292 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0806 00:38:47.516081    4292 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 00:38:47.516093    4292 start.go:495] detecting cgroup driver to use...
	I0806 00:38:47.516195    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:38:47.531806    4292 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0806 00:38:47.532062    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0806 00:38:47.541039    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0806 00:38:47.549828    4292 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0806 00:38:47.549876    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0806 00:38:47.558599    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:38:47.567484    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0806 00:38:47.576295    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:38:47.585146    4292 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 00:38:47.594084    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0806 00:38:47.603103    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0806 00:38:47.612032    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0806 00:38:47.620981    4292 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 00:38:47.628905    4292 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0806 00:38:47.629040    4292 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 00:38:47.637032    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:38:47.727863    4292 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0806 00:38:47.745831    4292 start.go:495] detecting cgroup driver to use...
	I0806 00:38:47.745898    4292 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0806 00:38:47.763079    4292 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0806 00:38:47.764017    4292 command_runner.go:130] > [Unit]
	I0806 00:38:47.764028    4292 command_runner.go:130] > Description=Docker Application Container Engine
	I0806 00:38:47.764033    4292 command_runner.go:130] > Documentation=https://docs.docker.com
	I0806 00:38:47.764038    4292 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0806 00:38:47.764043    4292 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0806 00:38:47.764047    4292 command_runner.go:130] > StartLimitBurst=3
	I0806 00:38:47.764051    4292 command_runner.go:130] > StartLimitIntervalSec=60
	I0806 00:38:47.764054    4292 command_runner.go:130] > [Service]
	I0806 00:38:47.764058    4292 command_runner.go:130] > Type=notify
	I0806 00:38:47.764062    4292 command_runner.go:130] > Restart=on-failure
	I0806 00:38:47.764066    4292 command_runner.go:130] > Environment=NO_PROXY=192.169.0.13
	I0806 00:38:47.764072    4292 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0806 00:38:47.764084    4292 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0806 00:38:47.764091    4292 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0806 00:38:47.764099    4292 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0806 00:38:47.764105    4292 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0806 00:38:47.764111    4292 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0806 00:38:47.764118    4292 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0806 00:38:47.764125    4292 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0806 00:38:47.764132    4292 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0806 00:38:47.764135    4292 command_runner.go:130] > ExecStart=
	I0806 00:38:47.764154    4292 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0806 00:38:47.764161    4292 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0806 00:38:47.764170    4292 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0806 00:38:47.764178    4292 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0806 00:38:47.764185    4292 command_runner.go:130] > LimitNOFILE=infinity
	I0806 00:38:47.764190    4292 command_runner.go:130] > LimitNPROC=infinity
	I0806 00:38:47.764193    4292 command_runner.go:130] > LimitCORE=infinity
	I0806 00:38:47.764198    4292 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0806 00:38:47.764203    4292 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0806 00:38:47.764207    4292 command_runner.go:130] > TasksMax=infinity
	I0806 00:38:47.764211    4292 command_runner.go:130] > TimeoutStartSec=0
	I0806 00:38:47.764221    4292 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0806 00:38:47.764225    4292 command_runner.go:130] > Delegate=yes
	I0806 00:38:47.764229    4292 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0806 00:38:47.764248    4292 command_runner.go:130] > KillMode=process
	I0806 00:38:47.764252    4292 command_runner.go:130] > [Install]
	I0806 00:38:47.764256    4292 command_runner.go:130] > WantedBy=multi-user.target
	I0806 00:38:47.765971    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:38:47.779284    4292 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 00:38:47.799617    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:38:47.811733    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:38:47.822897    4292 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0806 00:38:47.842546    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:38:47.852923    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:38:47.867417    4292 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0806 00:38:47.867762    4292 ssh_runner.go:195] Run: which cri-dockerd
	I0806 00:38:47.870482    4292 command_runner.go:130] > /usr/bin/cri-dockerd
	I0806 00:38:47.870656    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0806 00:38:47.877934    4292 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0806 00:38:47.891287    4292 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0806 00:38:47.996736    4292 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0806 00:38:48.093921    4292 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0806 00:38:48.093947    4292 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0806 00:38:48.107654    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:38:48.205348    4292 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:39:49.225463    4292 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0806 00:39:49.225479    4292 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0806 00:39:49.225576    4292 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.019011706s)
	I0806 00:39:49.225635    4292 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0806 00:39:49.235342    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0806 00:39:49.235356    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.029974914Z" level=info msg="Starting up"
	I0806 00:39:49.235366    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.030437769Z" level=info msg="containerd not running, starting managed containerd"
	I0806 00:39:49.235376    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.030979400Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=517
	I0806 00:39:49.235386    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.047036729Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0806 00:39:49.235397    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064397167Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0806 00:39:49.235412    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064452673Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0806 00:39:49.235422    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064502313Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0806 00:39:49.235431    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064513542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235443    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064584182Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235454    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064595120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235473    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064727739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235483    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064762709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235494    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064774342Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235504    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064782161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235516    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064887916Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235526    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.065042581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235542    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.066836201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235552    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.066879570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235575    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067028916Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235585    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067064324Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0806 00:39:49.235594    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067179567Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0806 00:39:49.235602    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067249087Z" level=info msg="metadata content store policy set" policy=shared
	I0806 00:39:49.235611    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069585528Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0806 00:39:49.235620    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069659860Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0806 00:39:49.235632    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069674694Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0806 00:39:49.235641    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069684754Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0806 00:39:49.235650    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069696901Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0806 00:39:49.235663    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069776277Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0806 00:39:49.235672    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070041788Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0806 00:39:49.235681    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070145442Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0806 00:39:49.235690    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070181841Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0806 00:39:49.235699    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070193788Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0806 00:39:49.235708    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070209053Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235730    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070220561Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235739    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070229053Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235748    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070237872Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235765    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070247145Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235774    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070258808Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235870    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070271932Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235884    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070282113Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235895    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070295317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235905    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070333749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235913    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070369063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235922    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070379382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235931    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070387399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235940    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070395816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235948    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070403669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235957    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070414456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235966    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070430669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235975    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070442977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235983    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070451302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235992    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070459477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236001    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070468439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236009    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070478113Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0806 00:39:49.236018    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070497412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236026    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070508384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236035    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070518009Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0806 00:39:49.236044    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070547883Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0806 00:39:49.236055    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070582373Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0806 00:39:49.236065    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070592270Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0806 00:39:49.236165    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070600495Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0806 00:39:49.236179    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070607217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236192    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070615273Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0806 00:39:49.236200    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070622931Z" level=info msg="NRI interface is disabled by configuration."
	I0806 00:39:49.236208    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070750538Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0806 00:39:49.236217    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070809085Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0806 00:39:49.236224    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070954500Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0806 00:39:49.236232    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070997549Z" level=info msg="containerd successfully booted in 0.024512s"
	I0806 00:39:49.236240    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.050791909Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0806 00:39:49.236247    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.057142082Z" level=info msg="Loading containers: start."
	I0806 00:39:49.236266    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.142415375Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0806 00:39:49.236275    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.222958623Z" level=info msg="Loading containers: done."
	I0806 00:39:49.236287    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.231011060Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	I0806 00:39:49.236296    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.231179810Z" level=info msg="Daemon has completed initialization"
	I0806 00:39:49.236304    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.256766502Z" level=info msg="API listen on [::]:2376"
	I0806 00:39:49.236312    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 systemd[1]: Started Docker Application Container Engine.
	I0806 00:39:49.236320    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.256921161Z" level=info msg="API listen on /var/run/docker.sock"
	I0806 00:39:49.236327    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.264611587Z" level=info msg="Processing signal 'terminated'"
	I0806 00:39:49.236336    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265650519Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0806 00:39:49.236346    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265852818Z" level=info msg="Daemon shutdown complete"
	I0806 00:39:49.236355    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265902413Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0806 00:39:49.236364    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265913447Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0806 00:39:49.236371    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0806 00:39:49.236376    4292 command_runner.go:130] > Aug 06 07:38:49 multinode-100000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0806 00:39:49.236404    4292 command_runner.go:130] > Aug 06 07:38:49 multinode-100000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0806 00:39:49.236411    4292 command_runner.go:130] > Aug 06 07:38:49 multinode-100000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0806 00:39:49.236417    4292 command_runner.go:130] > Aug 06 07:38:49 multinode-100000-m02 dockerd[911]: time="2024-08-06T07:38:49.299585024Z" level=info msg="Starting up"
	I0806 00:39:49.236427    4292 command_runner.go:130] > Aug 06 07:39:49 multinode-100000-m02 dockerd[911]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0806 00:39:49.236434    4292 command_runner.go:130] > Aug 06 07:39:49 multinode-100000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0806 00:39:49.236440    4292 command_runner.go:130] > Aug 06 07:39:49 multinode-100000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0806 00:39:49.236446    4292 command_runner.go:130] > Aug 06 07:39:49 multinode-100000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0806 00:39:49.260697    4292 out.go:177] 
	W0806 00:39:49.281618    4292 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 06 07:38:46 multinode-100000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.029974914Z" level=info msg="Starting up"
	Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.030437769Z" level=info msg="containerd not running, starting managed containerd"
	Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.030979400Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=517
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.047036729Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064397167Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064452673Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064502313Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064513542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064584182Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064595120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064727739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064762709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064774342Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064782161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064887916Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.065042581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.066836201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.066879570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067028916Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067064324Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067179567Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067249087Z" level=info msg="metadata content store policy set" policy=shared
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069585528Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069659860Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069674694Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069684754Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069696901Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069776277Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070041788Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070145442Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070181841Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070193788Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070209053Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070220561Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070229053Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070237872Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070247145Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070258808Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070271932Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070282113Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070295317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070333749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070369063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070379382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070387399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070395816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070403669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070414456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070430669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070442977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070451302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070459477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070468439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070478113Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070497412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070508384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070518009Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070547883Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070582373Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070592270Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070600495Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070607217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070615273Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070622931Z" level=info msg="NRI interface is disabled by configuration."
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070750538Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070809085Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070954500Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070997549Z" level=info msg="containerd successfully booted in 0.024512s"
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.050791909Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.057142082Z" level=info msg="Loading containers: start."
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.142415375Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.222958623Z" level=info msg="Loading containers: done."
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.231011060Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.231179810Z" level=info msg="Daemon has completed initialization"
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.256766502Z" level=info msg="API listen on [::]:2376"
	Aug 06 07:38:47 multinode-100000-m02 systemd[1]: Started Docker Application Container Engine.
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.256921161Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.264611587Z" level=info msg="Processing signal 'terminated'"
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265650519Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265852818Z" level=info msg="Daemon shutdown complete"
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265902413Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265913447Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 06 07:38:48 multinode-100000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Aug 06 07:38:49 multinode-100000-m02 systemd[1]: docker.service: Deactivated successfully.
	Aug 06 07:38:49 multinode-100000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:38:49 multinode-100000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:38:49 multinode-100000-m02 dockerd[911]: time="2024-08-06T07:38:49.299585024Z" level=info msg="Starting up"
	Aug 06 07:39:49 multinode-100000-m02 dockerd[911]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 06 07:39:49 multinode-100000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 06 07:39:49 multinode-100000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 06 07:39:49 multinode-100000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0806 00:39:49.281745    4292 out.go:239] * 
	W0806 00:39:49.282923    4292 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 00:39:49.343567    4292 out.go:177] 
	
	
	==> Docker <==
	Aug 06 07:38:15 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:15.493182809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:15 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:15.493265038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:18 multinode-100000 cri-dockerd[1120]: time="2024-08-06T07:38:18Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20240730-75a5af0c: Status: Downloaded newer image for kindest/kindnetd:v20240730-75a5af0c"
	Aug 06 07:38:18 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:18.816692550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:38:18 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:18.816804296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:38:18 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:18.816841555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:18 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:18.816985506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.120129490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.120234227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.120258660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.120405532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.122053171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.122124908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.122262728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.123348677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 cri-dockerd[1120]: time="2024-08-06T07:38:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5fae897eca5b0180afaec9950c31ab8fe6410f45ea64033ab2505d448d0abc87/resolv.conf as [nameserver 192.169.0.1]"
	Aug 06 07:38:31 multinode-100000 cri-dockerd[1120]: time="2024-08-06T07:38:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ea5bc31c54836987e38373933c6df0383027c87ef8cff7c9e1da5b24b5cabe9c/resolv.conf as [nameserver 192.169.0.1]"
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.260884497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.261094181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.261344995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.270291928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.310563342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.310630330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.310652817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.310750128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                      CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	4a58bc5cb9c3e       cbb01a7bd410d                                                                              About a minute ago   Running             coredns                   0                   ea5bc31c54836       coredns-7db6d8ff4d-snf8h
	47e0c0c6895ef       6e38f40d628db                                                                              About a minute ago   Running             storage-provisioner       0                   5fae897eca5b0       storage-provisioner
	ca21c7b20c75e       kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3   About a minute ago   Running             kindnet-cni               0                   731b397a827bd       kindnet-g2xk7
	10a2028447459       55bb025d2cfa5                                                                              About a minute ago   Running             kube-proxy                0                   6bbb2ed0b308f       kube-proxy-crsrr
	09c41cba0052b       3edc18e7b7672                                                                              About a minute ago   Running             kube-scheduler            0                   d20d569460ead       kube-scheduler-multinode-100000
	b60a8dd0efa51       3861cfcd7c04c                                                                              About a minute ago   Running             etcd                      0                   94cf07fa5ddcf       etcd-multinode-100000
	6d93185f30a91       1f6d574d502f3                                                                              About a minute ago   Running             kube-apiserver            0                   bde71375b0e4c       kube-apiserver-multinode-100000
	e6892e6b325e1       76932a3b37d7e                                                                              About a minute ago   Running             kube-controller-manager   0                   8cca7996d392f       kube-controller-manager-multinode-100000
	
	
	==> coredns [4a58bc5cb9c3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54441 - 10694 "HINFO IN 5152607944082316412.2643734041882751245. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.012399296s
	
	
	==> describe nodes <==
	Name:               multinode-100000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-100000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=multinode-100000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_06T00_38_01_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:37:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-100000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 07:39:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 07:38:30 +0000   Tue, 06 Aug 2024 07:37:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 07:38:30 +0000   Tue, 06 Aug 2024 07:37:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 07:38:30 +0000   Tue, 06 Aug 2024 07:37:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 07:38:30 +0000   Tue, 06 Aug 2024 07:38:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.13
	  Hostname:    multinode-100000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 10d8fd2a8ab04e6a90b6dfc076d9ae86
	  System UUID:                9d6d49b5-0000-0000-bb0f-6ea8b6ad2848
	  Boot ID:                    dbebf245-a006-4d46-bf5f-51c5f84b672f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-snf8h                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     96s
	  kube-system                 etcd-multinode-100000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         111s
	  kube-system                 kindnet-g2xk7                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      96s
	  kube-system                 kube-apiserver-multinode-100000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-multinode-100000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-crsrr                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-scheduler-multinode-100000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 95s                  kube-proxy       
	  Normal  Starting                 115s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  115s (x8 over 115s)  kubelet          Node multinode-100000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s (x8 over 115s)  kubelet          Node multinode-100000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s (x7 over 115s)  kubelet          Node multinode-100000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  115s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 110s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  110s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  110s                 kubelet          Node multinode-100000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    110s                 kubelet          Node multinode-100000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     110s                 kubelet          Node multinode-100000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           96s                  node-controller  Node multinode-100000 event: Registered Node multinode-100000 in Controller
	  Normal  NodeReady                80s                  kubelet          Node multinode-100000 status is now: NodeReady
	
	
	==> dmesg <==
	[  +2.638271] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.230733] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.851509] systemd-fstab-generator[493]: Ignoring "noauto" option for root device
	[  +0.100234] systemd-fstab-generator[504]: Ignoring "noauto" option for root device
	[  +1.793153] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +0.258718] systemd-fstab-generator[802]: Ignoring "noauto" option for root device
	[  +0.053606] kauditd_printk_skb: 95 callbacks suppressed
	[  +0.051277] systemd-fstab-generator[814]: Ignoring "noauto" option for root device
	[  +0.111209] systemd-fstab-generator[828]: Ignoring "noauto" option for root device
	[Aug 6 07:37] systemd-fstab-generator[1073]: Ignoring "noauto" option for root device
	[  +0.053283] kauditd_printk_skb: 92 callbacks suppressed
	[  +0.042150] systemd-fstab-generator[1085]: Ignoring "noauto" option for root device
	[  +0.103517] systemd-fstab-generator[1097]: Ignoring "noauto" option for root device
	[  +0.125760] systemd-fstab-generator[1112]: Ignoring "noauto" option for root device
	[  +3.585995] systemd-fstab-generator[1212]: Ignoring "noauto" option for root device
	[  +2.213789] kauditd_printk_skb: 100 callbacks suppressed
	[  +0.337931] systemd-fstab-generator[1463]: Ignoring "noauto" option for root device
	[  +3.523944] systemd-fstab-generator[1642]: Ignoring "noauto" option for root device
	[  +1.294549] kauditd_printk_skb: 100 callbacks suppressed
	[  +3.741886] systemd-fstab-generator[2044]: Ignoring "noauto" option for root device
	[Aug 6 07:38] systemd-fstab-generator[2255]: Ignoring "noauto" option for root device
	[  +0.124943] kauditd_printk_skb: 32 callbacks suppressed
	[ +16.004460] kauditd_printk_skb: 60 callbacks suppressed
	
	
	==> etcd [b60a8dd0efa5] <==
	{"level":"info","ts":"2024-08-06T07:37:56.789087Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-06T07:37:56.79064Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2024-08-06T07:37:56.790937Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 switched to configuration voters=(16152458731666035825)"}
	{"level":"info","ts":"2024-08-06T07:37:56.793629Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"e0290fa3161c5471","initial-advertise-peer-urls":["https://192.169.0.13:2380"],"listen-peer-urls":["https://192.169.0.13:2380"],"advertise-client-urls":["https://192.169.0.13:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.169.0.13:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-06T07:37:56.793645Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-06T07:37:56.796498Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2024-08-06T07:37:56.796632Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","added-peer-id":"e0290fa3161c5471","added-peer-peer-urls":["https://192.169.0.13:2380"]}
	{"level":"info","ts":"2024-08-06T07:37:57.149401Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-06T07:37:57.149446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-06T07:37:57.149465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgPreVoteResp from e0290fa3161c5471 at term 1"}
	{"level":"info","ts":"2024-08-06T07:37:57.149631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became candidate at term 2"}
	{"level":"info","ts":"2024-08-06T07:37:57.14964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgVoteResp from e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-08-06T07:37:57.149646Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became leader at term 2"}
	{"level":"info","ts":"2024-08-06T07:37:57.149652Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e0290fa3161c5471 elected leader e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-08-06T07:37:57.152418Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:37:57.153493Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e0290fa3161c5471","local-member-attributes":"{Name:multinode-100000 ClientURLs:[https://192.169.0.13:2379]}","request-path":"/0/members/e0290fa3161c5471/attributes","cluster-id":"87b46e718846f146","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-06T07:37:57.153528Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T07:37:57.154583Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T07:37:57.156332Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-06T07:37:57.162987Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.13:2379"}
	{"level":"info","ts":"2024-08-06T07:37:57.167336Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-06T07:37:57.167373Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-06T07:37:57.16953Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:37:57.169589Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:37:57.169719Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 07:39:51 up 4 min,  0 users,  load average: 0.16, 0.11, 0.04
	Linux multinode-100000 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ca21c7b20c75] <==
	I0806 07:38:19.108707       1 main.go:148] setting mtu 1500 for CNI 
	I0806 07:38:19.109066       1 main.go:178] kindnetd IP family: "ipv4"
	I0806 07:38:19.109256       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0806 07:38:19.607763       1 main.go:237] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-37: Error: Could not process rule: Operation not supported
	add table inet kube-network-policies
	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	, skipping network policies
	I0806 07:38:29.611043       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:38:29.611285       1 main.go:299] handling current node
	I0806 07:38:39.609806       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:38:39.609989       1 main.go:299] handling current node
	I0806 07:38:49.609926       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:38:49.610272       1 main.go:299] handling current node
	I0806 07:38:59.615107       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:38:59.615280       1 main.go:299] handling current node
	I0806 07:39:09.615947       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:39:09.616007       1 main.go:299] handling current node
	I0806 07:39:19.608422       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:39:19.608442       1 main.go:299] handling current node
	I0806 07:39:29.615520       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:39:29.615668       1 main.go:299] handling current node
	I0806 07:39:39.609414       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:39:39.609446       1 main.go:299] handling current node
	I0806 07:39:49.616416       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:39:49.616458       1 main.go:299] handling current node
	
	
	==> kube-apiserver [6d93185f30a9] <==
	I0806 07:37:58.429208       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0806 07:37:58.429330       1 cache.go:39] Caches are synced for autoregister controller
	I0806 07:37:58.451567       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0806 07:37:58.455055       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0806 07:37:58.455074       1 policy_source.go:224] refreshing policies
	E0806 07:37:58.467821       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E0806 07:37:58.475966       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0806 07:37:58.532827       1 controller.go:615] quota admission added evaluator for: namespaces
	E0806 07:37:58.541093       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0806 07:37:58.672921       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0806 07:37:59.326856       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0806 07:37:59.329555       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0806 07:37:59.329585       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0806 07:37:59.607795       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0806 07:37:59.629707       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0806 07:37:59.743716       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0806 07:37:59.749420       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.13]
	I0806 07:37:59.751068       1 controller.go:615] quota admission added evaluator for: endpoints
	I0806 07:37:59.755409       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0806 07:38:00.364128       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0806 07:38:00.587524       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0806 07:38:00.593919       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0806 07:38:00.599813       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0806 07:38:14.702592       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0806 07:38:14.795881       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [e6892e6b325e] <==
	I0806 07:38:14.733454       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0806 07:38:14.763664       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-100000" podCIDRs=["10.244.0.0/24"]
	I0806 07:38:14.826673       1 shared_informer.go:320] Caches are synced for HPA
	I0806 07:38:14.833253       1 shared_informer.go:320] Caches are synced for resource quota
	I0806 07:38:14.864814       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0806 07:38:14.911267       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0806 07:38:14.915445       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0806 07:38:14.917635       1 shared_informer.go:320] Caches are synced for resource quota
	I0806 07:38:15.016538       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	I0806 07:38:15.198343       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="389.133142ms"
	I0806 07:38:15.220236       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.849107ms"
	I0806 07:38:15.220368       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="94.121µs"
	I0806 07:38:15.344428       1 shared_informer.go:320] Caches are synced for garbage collector
	I0806 07:38:15.355219       1 shared_informer.go:320] Caches are synced for garbage collector
	I0806 07:38:15.355235       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0806 07:38:15.401729       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="38.655935ms"
	I0806 07:38:15.431945       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="30.14675ms"
	I0806 07:38:15.458535       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="26.562482ms"
	I0806 07:38:15.458649       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="50.614µs"
	I0806 07:38:30.766337       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="35.896µs"
	I0806 07:38:30.775206       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.914µs"
	I0806 07:38:31.717892       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="59.878µs"
	I0806 07:38:31.736658       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="9.976174ms"
	I0806 07:38:31.737084       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.186µs"
	I0806 07:38:34.714007       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [10a202844745] <==
	I0806 07:38:15.590518       1 server_linux.go:69] "Using iptables proxy"
	I0806 07:38:15.601869       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.13"]
	I0806 07:38:15.662400       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0806 07:38:15.662440       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0806 07:38:15.662490       1 server_linux.go:165] "Using iptables Proxier"
	I0806 07:38:15.664791       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0806 07:38:15.664918       1 server.go:872] "Version info" version="v1.30.3"
	I0806 07:38:15.664946       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 07:38:15.665753       1 config.go:192] "Starting service config controller"
	I0806 07:38:15.665783       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0806 07:38:15.665799       1 config.go:101] "Starting endpoint slice config controller"
	I0806 07:38:15.665822       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0806 07:38:15.667388       1 config.go:319] "Starting node config controller"
	I0806 07:38:15.667416       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0806 07:38:15.765917       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0806 07:38:15.765965       1 shared_informer.go:320] Caches are synced for service config
	I0806 07:38:15.767534       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [09c41cba0052] <==
	W0806 07:37:58.445840       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0806 07:37:58.445932       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0806 07:37:58.446107       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0806 07:37:58.446242       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0806 07:37:58.446116       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0806 07:37:58.446419       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0806 07:37:58.445401       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0806 07:37:58.446582       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0806 07:37:58.446196       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0806 07:37:58.446734       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0806 07:37:59.253603       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0806 07:37:59.253776       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0806 07:37:59.282330       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0806 07:37:59.282504       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0806 07:37:59.305407       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0806 07:37:59.305621       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0806 07:37:59.351009       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0806 07:37:59.351049       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0806 07:37:59.487287       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0806 07:37:59.487395       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0806 07:37:59.506883       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0806 07:37:59.506925       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0806 07:37:59.509357       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0806 07:37:59.509392       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0806 07:38:01.840667       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 06 07:38:14 multinode-100000 kubelet[2051]: I0806 07:38:14.833088    2051 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/84207ead-3403-4759-9bf2-ae0aa742699e-xtables-lock\") pod \"kindnet-g2xk7\" (UID: \"84207ead-3403-4759-9bf2-ae0aa742699e\") " pod="kube-system/kindnet-g2xk7"
	Aug 06 07:38:14 multinode-100000 kubelet[2051]: I0806 07:38:14.833105    2051 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f72beca3-9601-4aad-b3ba-33f8de5db052-xtables-lock\") pod \"kube-proxy-crsrr\" (UID: \"f72beca3-9601-4aad-b3ba-33f8de5db052\") " pod="kube-system/kube-proxy-crsrr"
	Aug 06 07:38:14 multinode-100000 kubelet[2051]: I0806 07:38:14.833116    2051 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/84207ead-3403-4759-9bf2-ae0aa742699e-cni-cfg\") pod \"kindnet-g2xk7\" (UID: \"84207ead-3403-4759-9bf2-ae0aa742699e\") " pod="kube-system/kindnet-g2xk7"
	Aug 06 07:38:14 multinode-100000 kubelet[2051]: I0806 07:38:14.833128    2051 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f72beca3-9601-4aad-b3ba-33f8de5db052-kube-proxy\") pod \"kube-proxy-crsrr\" (UID: \"f72beca3-9601-4aad-b3ba-33f8de5db052\") " pod="kube-system/kube-proxy-crsrr"
	Aug 06 07:38:14 multinode-100000 kubelet[2051]: I0806 07:38:14.833141    2051 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84207ead-3403-4759-9bf2-ae0aa742699e-lib-modules\") pod \"kindnet-g2xk7\" (UID: \"84207ead-3403-4759-9bf2-ae0aa742699e\") " pod="kube-system/kindnet-g2xk7"
	Aug 06 07:38:14 multinode-100000 kubelet[2051]: I0806 07:38:14.833155    2051 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f72beca3-9601-4aad-b3ba-33f8de5db052-lib-modules\") pod \"kube-proxy-crsrr\" (UID: \"f72beca3-9601-4aad-b3ba-33f8de5db052\") " pod="kube-system/kube-proxy-crsrr"
	Aug 06 07:38:14 multinode-100000 kubelet[2051]: I0806 07:38:14.833168    2051 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vghj\" (UniqueName: \"kubernetes.io/projected/84207ead-3403-4759-9bf2-ae0aa742699e-kube-api-access-8vghj\") pod \"kindnet-g2xk7\" (UID: \"84207ead-3403-4759-9bf2-ae0aa742699e\") " pod="kube-system/kindnet-g2xk7"
	Aug 06 07:38:14 multinode-100000 kubelet[2051]: I0806 07:38:14.848194    2051 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 06 07:38:14 multinode-100000 kubelet[2051]: I0806 07:38:14.848622    2051 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 06 07:38:15 multinode-100000 kubelet[2051]: I0806 07:38:15.615092    2051 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-crsrr" podStartSLOduration=1.615079997 podStartE2EDuration="1.615079997s" podCreationTimestamp="2024-08-06 07:38:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-06 07:38:15.614956607 +0000 UTC m=+15.275385991" watchObservedRunningTime="2024-08-06 07:38:15.615079997 +0000 UTC m=+15.275509380"
	Aug 06 07:38:30 multinode-100000 kubelet[2051]: I0806 07:38:30.747638    2051 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	Aug 06 07:38:30 multinode-100000 kubelet[2051]: I0806 07:38:30.764195    2051 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-g2xk7" podStartSLOduration=13.480337397 podStartE2EDuration="16.764183053s" podCreationTimestamp="2024-08-06 07:38:14 +0000 UTC" firstStartedPulling="2024-08-06 07:38:15.468236664 +0000 UTC m=+15.128666040" lastFinishedPulling="2024-08-06 07:38:18.752082321 +0000 UTC m=+18.412511696" observedRunningTime="2024-08-06 07:38:19.63204653 +0000 UTC m=+19.292475913" watchObservedRunningTime="2024-08-06 07:38:30.764183053 +0000 UTC m=+30.424612430"
	Aug 06 07:38:30 multinode-100000 kubelet[2051]: I0806 07:38:30.764432    2051 topology_manager.go:215] "Topology Admit Handler" podUID="80bd44de-6f91-4e47-8832-a66b3c64808d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-snf8h"
	Aug 06 07:38:30 multinode-100000 kubelet[2051]: I0806 07:38:30.766673    2051 topology_manager.go:215] "Topology Admit Handler" podUID="38b20fa5-6002-4e12-860f-1aa0047581b1" podNamespace="kube-system" podName="storage-provisioner"
	Aug 06 07:38:30 multinode-100000 kubelet[2051]: I0806 07:38:30.850584    2051 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhwl9\" (UniqueName: \"kubernetes.io/projected/80bd44de-6f91-4e47-8832-a66b3c64808d-kube-api-access-zhwl9\") pod \"coredns-7db6d8ff4d-snf8h\" (UID: \"80bd44de-6f91-4e47-8832-a66b3c64808d\") " pod="kube-system/coredns-7db6d8ff4d-snf8h"
	Aug 06 07:38:30 multinode-100000 kubelet[2051]: I0806 07:38:30.850814    2051 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx958\" (UniqueName: \"kubernetes.io/projected/38b20fa5-6002-4e12-860f-1aa0047581b1-kube-api-access-xx958\") pod \"storage-provisioner\" (UID: \"38b20fa5-6002-4e12-860f-1aa0047581b1\") " pod="kube-system/storage-provisioner"
	Aug 06 07:38:30 multinode-100000 kubelet[2051]: I0806 07:38:30.851027    2051 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80bd44de-6f91-4e47-8832-a66b3c64808d-config-volume\") pod \"coredns-7db6d8ff4d-snf8h\" (UID: \"80bd44de-6f91-4e47-8832-a66b3c64808d\") " pod="kube-system/coredns-7db6d8ff4d-snf8h"
	Aug 06 07:38:30 multinode-100000 kubelet[2051]: I0806 07:38:30.851295    2051 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/38b20fa5-6002-4e12-860f-1aa0047581b1-tmp\") pod \"storage-provisioner\" (UID: \"38b20fa5-6002-4e12-860f-1aa0047581b1\") " pod="kube-system/storage-provisioner"
	Aug 06 07:38:31 multinode-100000 kubelet[2051]: I0806 07:38:31.706475    2051 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.70646273 podStartE2EDuration="16.70646273s" podCreationTimestamp="2024-08-06 07:38:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-06 07:38:31.706289287 +0000 UTC m=+31.366718670" watchObservedRunningTime="2024-08-06 07:38:31.70646273 +0000 UTC m=+31.366892108"
	Aug 06 07:38:31 multinode-100000 kubelet[2051]: I0806 07:38:31.719595    2051 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-snf8h" podStartSLOduration=17.719580707 podStartE2EDuration="17.719580707s" podCreationTimestamp="2024-08-06 07:38:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-06 07:38:31.716902634 +0000 UTC m=+31.377332017" watchObservedRunningTime="2024-08-06 07:38:31.719580707 +0000 UTC m=+31.380010083"
	Aug 06 07:39:00 multinode-100000 kubelet[2051]: E0806 07:39:00.482940    2051 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:39:00 multinode-100000 kubelet[2051]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:39:00 multinode-100000 kubelet[2051]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:39:00 multinode-100000 kubelet[2051]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:39:00 multinode-100000 kubelet[2051]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [47e0c0c6895e] <==
	I0806 07:38:31.347790       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0806 07:38:31.362641       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0806 07:38:31.362689       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0806 07:38:31.380276       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0806 07:38:31.381044       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-100000_c7848ced-7c56-4ea5-84d6-257282f6fd56!
	I0806 07:38:31.382785       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"161b611b-7c0d-4908-b494-e0f62b136e12", APIVersion:"v1", ResourceVersion:"439", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-100000_c7848ced-7c56-4ea5-84d6-257282f6fd56 became leader
	I0806 07:38:31.481893       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-100000_c7848ced-7c56-4ea5-84d6-257282f6fd56!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-100000 -n multinode-100000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-100000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/FreshStart2Nodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (259.59s)

TestMultiNode/serial/DeployApp2Nodes (711.58s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-100000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-100000 -- rollout status deployment/busybox
E0806 00:42:41.398340    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
E0806 00:43:22.336024    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
E0806 00:46:25.393322    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
E0806 00:47:41.403053    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
E0806 00:48:22.342859    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-100000 -- rollout status deployment/busybox: exit status 1 (10m3.505324045s)

-- stdout --
	Waiting for deployment "busybox" rollout to finish: 0 of 2 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 1 of 2 updated replicas are available...

-- /stdout --
** stderr ** 
	error: deployment "busybox" exceeded its progress deadline

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:524: failed to resolve pod IPs: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-100000 -- exec busybox-fc5497c4f-6l7f2 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-100000 -- exec busybox-fc5497c4f-6l7f2 -- nslookup kubernetes.io: exit status 1 (119.002595ms)

** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-6l7f2 does not have a host assigned

** /stderr **
multinode_test.go:538: Pod busybox-fc5497c4f-6l7f2 could not resolve 'kubernetes.io': exit status 1
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-100000 -- exec busybox-fc5497c4f-dzbn7 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-100000 -- exec busybox-fc5497c4f-6l7f2 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-100000 -- exec busybox-fc5497c4f-6l7f2 -- nslookup kubernetes.default: exit status 1 (118.569122ms)

** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-6l7f2 does not have a host assigned

** /stderr **
multinode_test.go:548: Pod busybox-fc5497c4f-6l7f2 could not resolve 'kubernetes.default': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-100000 -- exec busybox-fc5497c4f-dzbn7 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-100000 -- exec busybox-fc5497c4f-6l7f2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-100000 -- exec busybox-fc5497c4f-6l7f2 -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (118.668846ms)

** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-6l7f2 does not have a host assigned

** /stderr **
multinode_test.go:556: Pod busybox-fc5497c4f-6l7f2 could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-100000 -- exec busybox-fc5497c4f-dzbn7 -- nslookup kubernetes.default.svc.cluster.local
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-100000 -n multinode-100000
helpers_test.go:244: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-100000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-100000 logs -n 25: (2.051397295s)
helpers_test.go:252: TestMultiNode/serial/DeployApp2Nodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p first-500000                                   | first-500000         | jenkins | v1.33.1 | 06 Aug 24 00:33 PDT | 06 Aug 24 00:33 PDT |
	| start   | -p mount-start-1-243000                           | mount-start-1-243000 | jenkins | v1.33.1 | 06 Aug 24 00:33 PDT |                     |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46464                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=hyperkit                                 |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-257000                           | mount-start-2-257000 | jenkins | v1.33.1 | 06 Aug 24 00:35 PDT | 06 Aug 24 00:35 PDT |
	| delete  | -p mount-start-1-243000                           | mount-start-1-243000 | jenkins | v1.33.1 | 06 Aug 24 00:35 PDT | 06 Aug 24 00:35 PDT |
	| start   | -p multinode-100000                               | multinode-100000     | jenkins | v1.33.1 | 06 Aug 24 00:35 PDT |                     |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=hyperkit                                 |                      |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- apply -f                   | multinode-100000     | jenkins | v1.33.1 | 06 Aug 24 00:39 PDT | 06 Aug 24 00:39 PDT |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- rollout                    | multinode-100000     | jenkins | v1.33.1 | 06 Aug 24 00:39 PDT |                     |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000     | jenkins | v1.33.1 | 06 Aug 24 00:49 PDT | 06 Aug 24 00:49 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000     | jenkins | v1.33.1 | 06 Aug 24 00:49 PDT | 06 Aug 24 00:49 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000     | jenkins | v1.33.1 | 06 Aug 24 00:49 PDT | 06 Aug 24 00:49 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000     | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000     | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000     | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000     | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000     | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000     | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000     | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000     | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000     | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec                       | multinode-100000     | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT |                     |
	|         | busybox-fc5497c4f-6l7f2 --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec                       | multinode-100000     | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | busybox-fc5497c4f-dzbn7 --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec                       | multinode-100000     | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT |                     |
	|         | busybox-fc5497c4f-6l7f2 --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec                       | multinode-100000     | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | busybox-fc5497c4f-dzbn7 --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec                       | multinode-100000     | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT |                     |
	|         | busybox-fc5497c4f-6l7f2 -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec                       | multinode-100000     | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | busybox-fc5497c4f-dzbn7 -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 00:35:32
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 00:35:32.676325    4292 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:35:32.676601    4292 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:35:32.676607    4292 out.go:304] Setting ErrFile to fd 2...
	I0806 00:35:32.676610    4292 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:35:32.676768    4292 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	I0806 00:35:32.678248    4292 out.go:298] Setting JSON to false
	I0806 00:35:32.700659    4292 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2094,"bootTime":1722927638,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0806 00:35:32.700749    4292 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 00:35:32.723275    4292 out.go:177] * [multinode-100000] minikube v1.33.1 on Darwin 14.5
	I0806 00:35:32.765686    4292 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 00:35:32.765838    4292 notify.go:220] Checking for updates...
	I0806 00:35:32.808341    4292 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:35:32.829496    4292 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0806 00:35:32.850407    4292 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:35:32.871672    4292 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 00:35:32.892641    4292 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 00:35:32.913945    4292 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:35:32.944520    4292 out.go:177] * Using the hyperkit driver based on user configuration
	I0806 00:35:32.986143    4292 start.go:297] selected driver: hyperkit
	I0806 00:35:32.986161    4292 start.go:901] validating driver "hyperkit" against <nil>
	I0806 00:35:32.986176    4292 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 00:35:32.989717    4292 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:35:32.989824    4292 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19370-944/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0806 00:35:32.998218    4292 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0806 00:35:33.002169    4292 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:35:33.002189    4292 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0806 00:35:33.002223    4292 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 00:35:33.002423    4292 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 00:35:33.002481    4292 cni.go:84] Creating CNI manager for ""
	I0806 00:35:33.002490    4292 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0806 00:35:33.002502    4292 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0806 00:35:33.002569    4292 start.go:340] cluster config:
	{Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:35:33.002652    4292 iso.go:125] acquiring lock: {Name:mka9ceffb203a07dd8928fb34e5b66df1a4204ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:35:33.044508    4292 out.go:177] * Starting "multinode-100000" primary control-plane node in "multinode-100000" cluster
	I0806 00:35:33.065219    4292 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:35:33.065293    4292 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0806 00:35:33.065354    4292 cache.go:56] Caching tarball of preloaded images
	I0806 00:35:33.065635    4292 preload.go:172] Found /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0806 00:35:33.065654    4292 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 00:35:33.066173    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:35:33.066211    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json: {Name:mk72349cbf3074da6761af52b168e673548f3ffe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:35:33.066817    4292 start.go:360] acquireMachinesLock for multinode-100000: {Name:mk23fe223591838ba69a1052c4474834b6e8897d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:35:33.066922    4292 start.go:364] duration metric: took 85.684µs to acquireMachinesLock for "multinode-100000"
	I0806 00:35:33.066972    4292 start.go:93] Provisioning new machine with config: &{Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 00:35:33.067065    4292 start.go:125] createHost starting for "" (driver="hyperkit")
	I0806 00:35:33.088582    4292 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 00:35:33.088841    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:35:33.088907    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:35:33.098805    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52410
	I0806 00:35:33.099159    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:35:33.099600    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:35:33.099614    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:35:33.099818    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:35:33.099943    4292 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:35:33.100033    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:33.100130    4292 start.go:159] libmachine.API.Create for "multinode-100000" (driver="hyperkit")
	I0806 00:35:33.100152    4292 client.go:168] LocalClient.Create starting
	I0806 00:35:33.100189    4292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem
	I0806 00:35:33.100243    4292 main.go:141] libmachine: Decoding PEM data...
	I0806 00:35:33.100257    4292 main.go:141] libmachine: Parsing certificate...
	I0806 00:35:33.100320    4292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem
	I0806 00:35:33.100359    4292 main.go:141] libmachine: Decoding PEM data...
	I0806 00:35:33.100370    4292 main.go:141] libmachine: Parsing certificate...
	I0806 00:35:33.100382    4292 main.go:141] libmachine: Running pre-create checks...
	I0806 00:35:33.100392    4292 main.go:141] libmachine: (multinode-100000) Calling .PreCreateCheck
	I0806 00:35:33.100485    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:33.100635    4292 main.go:141] libmachine: (multinode-100000) Calling .GetConfigRaw
	I0806 00:35:33.109837    4292 main.go:141] libmachine: Creating machine...
	I0806 00:35:33.109854    4292 main.go:141] libmachine: (multinode-100000) Calling .Create
	I0806 00:35:33.110025    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:33.110277    4292 main.go:141] libmachine: (multinode-100000) DBG | I0806 00:35:33.110022    4300 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 00:35:33.110418    4292 main.go:141] libmachine: (multinode-100000) Downloading /Users/jenkins/minikube-integration/19370-944/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-944/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 00:35:33.295827    4292 main.go:141] libmachine: (multinode-100000) DBG | I0806 00:35:33.295690    4300 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa...
	I0806 00:35:33.502634    4292 main.go:141] libmachine: (multinode-100000) DBG | I0806 00:35:33.502493    4300 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/multinode-100000.rawdisk...
	I0806 00:35:33.502655    4292 main.go:141] libmachine: (multinode-100000) DBG | Writing magic tar header
	I0806 00:35:33.502665    4292 main.go:141] libmachine: (multinode-100000) DBG | Writing SSH key tar header
	I0806 00:35:33.503537    4292 main.go:141] libmachine: (multinode-100000) DBG | I0806 00:35:33.503390    4300 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000 ...
	I0806 00:35:33.877390    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:33.877412    4292 main.go:141] libmachine: (multinode-100000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/hyperkit.pid
	I0806 00:35:33.877424    4292 main.go:141] libmachine: (multinode-100000) DBG | Using UUID 9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848
	I0806 00:35:33.988705    4292 main.go:141] libmachine: (multinode-100000) DBG | Generated MAC 1a:eb:5b:3:28:91
	I0806 00:35:33.988725    4292 main.go:141] libmachine: (multinode-100000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000
	I0806 00:35:33.988759    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000aa330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 00:35:33.988793    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000aa330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 00:35:33.988839    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/multinode-100000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"}
	I0806 00:35:33.988870    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/multinode-100000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/console-ring -f kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"
	I0806 00:35:33.988893    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0806 00:35:33.991956    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: Pid is 4303
	I0806 00:35:33.992376    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 0
	I0806 00:35:33.992391    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:33.992446    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:33.993278    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:33.993360    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:33.993380    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:33.993405    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:33.993424    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:33.993437    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:33.993449    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:33.993464    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:33.993498    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:33.993520    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:33.993540    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:33.993552    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:33.993562    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:33.999245    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0806 00:35:34.053136    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0806 00:35:34.053714    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:35:34.053737    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:35:34.053746    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:35:34.053754    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:35:34.433368    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0806 00:35:34.433384    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0806 00:35:34.548018    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:35:34.548040    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:35:34.548066    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:35:34.548085    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:35:34.548944    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0806 00:35:34.548954    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0806 00:35:35.995149    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 1
	I0806 00:35:35.995163    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:35.995266    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:35.996054    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:35.996094    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:35.996108    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:35.996132    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:35.996169    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:35.996185    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:35.996200    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:35.996223    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:35.996236    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:35.996250    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:35.996258    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:35.996265    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:35.996272    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:37.997721    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 2
	I0806 00:35:37.997737    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:37.997833    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:37.998751    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:37.998796    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:37.998808    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:37.998817    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:37.998824    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:37.998834    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:37.998843    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:37.998850    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:37.998857    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:37.998872    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:37.998885    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:37.998906    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:37.998915    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:40.000050    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 3
	I0806 00:35:40.000064    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:40.000167    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:40.000922    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:40.000982    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:40.000992    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:40.001002    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:40.001009    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:40.001016    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:40.001021    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:40.001028    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:40.001034    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:40.001051    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:40.001065    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:40.001075    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:40.001092    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:40.125670    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:40 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0806 00:35:40.125726    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:40 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0806 00:35:40.125735    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:40 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0806 00:35:40.149566    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:40 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0806 00:35:42.001968    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 4
	I0806 00:35:42.001983    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:42.002066    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:42.002835    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:42.002890    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:42.002900    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:42.002909    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:42.002917    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:42.002940    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:42.002948    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:42.002955    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:42.002964    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:42.002970    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:42.002978    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:42.002985    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:42.002996    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:44.004662    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 5
	I0806 00:35:44.004678    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:44.004700    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:44.005526    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:44.005569    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:35:44.005581    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:35:44.005591    4292 main.go:141] libmachine: (multinode-100000) DBG | Found match: 1a:eb:5b:3:28:91
	I0806 00:35:44.005619    4292 main.go:141] libmachine: (multinode-100000) DBG | IP: 192.169.0.13
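	The attempts above show the driver polling /var/db/dhcpd_leases until the generated MAC 1a:eb:5b:3:28:91 appears with a lease. A minimal Go sketch of that lookup, assuming the typical macOS dhcpd_leases layout (brace-delimited records of key=value lines, hw_address prefixed with a type byte); struct and function names here are illustrative, not the hyperkit driver's API:
	
	```go
	package main
	
	import (
		"bufio"
		"fmt"
		"io"
		"strings"
	)
	
	// leaseEntry holds the fields the driver logs for each lease record
	// (illustrative struct, not the driver's actual type).
	type leaseEntry struct {
		Name      string
		IPAddress string
		HWAddress string
	}
	
	// parseLeases reads brace-delimited records of key=value lines, e.g.
	// ip_address=192.169.0.13 and hw_address=1,1a:eb:5b:3:28:91.
	func parseLeases(r io.Reader) []leaseEntry {
		var entries []leaseEntry
		var cur leaseEntry
		sc := bufio.NewScanner(r)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case line == "{":
				cur = leaseEntry{}
			case line == "}":
				entries = append(entries, cur)
			case strings.HasPrefix(line, "name="):
				cur.Name = strings.TrimPrefix(line, "name=")
			case strings.HasPrefix(line, "ip_address="):
				cur.IPAddress = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// hw_address is "1,<mac>"; drop the leading type byte.
				if i := strings.IndexByte(line, ','); i >= 0 {
					cur.HWAddress = line[i+1:]
				}
			}
		}
		return entries
	}
	
	// ipForMAC returns the lease IP for a MAC, if present. Note the log
	// shows octets without zero padding ("1a:eb:5b:3:28:91"), so the
	// query must use the same unpadded form.
	func ipForMAC(entries []leaseEntry, mac string) (string, bool) {
		for _, e := range entries {
			if e.HWAddress == mac {
				return e.IPAddress, true
			}
		}
		return "", false
	}
	
	func main() {
		leases := "{\n\tname=minikube\n\tip_address=192.169.0.13\n\thw_address=1,1a:eb:5b:3:28:91\n\tlease=0x66b323cf\n}\n"
		entries := parseLeases(strings.NewReader(leases))
		if ip, ok := ipForMAC(entries, "1a:eb:5b:3:28:91"); ok {
			fmt.Println(ip) // 192.169.0.13
		}
	}
	```
	
	The real driver wraps this lookup in the 2-second retry loop visible as "Attempt 0" through "Attempt 5" above.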
	I0806 00:35:44.005700    4292 main.go:141] libmachine: (multinode-100000) Calling .GetConfigRaw
	I0806 00:35:44.006323    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:44.006428    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:44.006524    4292 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0806 00:35:44.006537    4292 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:35:44.006634    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:44.006694    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:44.007476    4292 main.go:141] libmachine: Detecting operating system of created instance...
	I0806 00:35:44.007487    4292 main.go:141] libmachine: Waiting for SSH to be available...
	I0806 00:35:44.007493    4292 main.go:141] libmachine: Getting to WaitForSSH function...
	I0806 00:35:44.007498    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:44.007591    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:44.007674    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:44.007764    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:44.007853    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:44.007987    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:44.008184    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:44.008192    4292 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0806 00:35:45.076448    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:35:45.076465    4292 main.go:141] libmachine: Detecting the provisioner...
	I0806 00:35:45.076471    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.076624    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.076724    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.076819    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.076915    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.077045    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.077189    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.077197    4292 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0806 00:35:45.144548    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0806 00:35:45.144591    4292 main.go:141] libmachine: found compatible host: buildroot
	I0806 00:35:45.144598    4292 main.go:141] libmachine: Provisioning with buildroot...
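	The provisioner match above keys off the ID field of the `cat /etc/os-release` output (here ID=buildroot). A small Go sketch of that detection step; the helper name is illustrative, not libmachine's actual function:
	
	```go
	package main
	
	import (
		"bufio"
		"fmt"
		"strings"
	)
	
	// osReleaseID extracts the ID field from /etc/os-release content,
	// the value the provisioner selection compares against "buildroot".
	func osReleaseID(osRelease string) string {
		sc := bufio.NewScanner(strings.NewReader(osRelease))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "ID=") {
				// os-release values may be quoted, e.g. ID="buildroot".
				return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
			}
		}
		return ""
	}
	
	func main() {
		out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
		fmt.Println(osReleaseID(out)) // buildroot
	}
	```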
	I0806 00:35:45.144603    4292 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:35:45.144740    4292 buildroot.go:166] provisioning hostname "multinode-100000"
	I0806 00:35:45.144749    4292 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:35:45.144843    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.144938    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.145034    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.145124    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.145213    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.145351    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.145492    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.145501    4292 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-100000 && echo "multinode-100000" | sudo tee /etc/hostname
	I0806 00:35:45.223228    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-100000
	
	I0806 00:35:45.223249    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.223379    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.223481    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.223570    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.223660    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.223790    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.223939    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.223951    4292 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-100000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-100000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-100000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 00:35:45.292034    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:35:45.292059    4292 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19370-944/.minikube CaCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19370-944/.minikube}
	I0806 00:35:45.292078    4292 buildroot.go:174] setting up certificates
	I0806 00:35:45.292089    4292 provision.go:84] configureAuth start
	I0806 00:35:45.292095    4292 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:35:45.292225    4292 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:35:45.292323    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.292419    4292 provision.go:143] copyHostCerts
	I0806 00:35:45.292449    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:35:45.292512    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem, removing ...
	I0806 00:35:45.292520    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:35:45.292668    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem (1078 bytes)
	I0806 00:35:45.292900    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:35:45.292931    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem, removing ...
	I0806 00:35:45.292935    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:35:45.293022    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem (1123 bytes)
	I0806 00:35:45.293179    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:35:45.293218    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem, removing ...
	I0806 00:35:45.293223    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:35:45.293307    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem (1679 bytes)
	I0806 00:35:45.293461    4292 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem org=jenkins.multinode-100000 san=[127.0.0.1 192.169.0.13 localhost minikube multinode-100000]
	I0806 00:35:45.520073    4292 provision.go:177] copyRemoteCerts
	I0806 00:35:45.520131    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 00:35:45.520149    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.520304    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.520400    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.520492    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.520588    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:35:45.562400    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0806 00:35:45.562481    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 00:35:45.581346    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0806 00:35:45.581402    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0806 00:35:45.600722    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0806 00:35:45.600779    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 00:35:45.620152    4292 provision.go:87] duration metric: took 328.044128ms to configureAuth
	I0806 00:35:45.620167    4292 buildroot.go:189] setting minikube options for container-runtime
	I0806 00:35:45.620308    4292 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:35:45.620324    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:45.620480    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.620572    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.620655    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.620746    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.620832    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.620951    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.621092    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.621099    4292 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0806 00:35:45.688009    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0806 00:35:45.688025    4292 buildroot.go:70] root file system type: tmpfs
	I0806 00:35:45.688103    4292 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0806 00:35:45.688116    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.688258    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.688371    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.688463    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.688579    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.688745    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.688882    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.688931    4292 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0806 00:35:45.766293    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0806 00:35:45.766319    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.766466    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.766564    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.766645    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.766724    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.766843    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.766987    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.766999    4292 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0806 00:35:47.341714    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0806 00:35:47.341733    4292 main.go:141] libmachine: Checking connection to Docker...
	I0806 00:35:47.341750    4292 main.go:141] libmachine: (multinode-100000) Calling .GetURL
	I0806 00:35:47.341889    4292 main.go:141] libmachine: Docker is up and running!
	I0806 00:35:47.341898    4292 main.go:141] libmachine: Reticulating splines...
	I0806 00:35:47.341902    4292 client.go:171] duration metric: took 14.241464585s to LocalClient.Create
	I0806 00:35:47.341919    4292 start.go:167] duration metric: took 14.241510649s to libmachine.API.Create "multinode-100000"
	I0806 00:35:47.341930    4292 start.go:293] postStartSetup for "multinode-100000" (driver="hyperkit")
	I0806 00:35:47.341937    4292 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 00:35:47.341947    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.342092    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 00:35:47.342105    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:47.342199    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:47.342285    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.342379    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:47.342467    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:35:47.382587    4292 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 00:35:47.385469    4292 command_runner.go:130] > NAME=Buildroot
	I0806 00:35:47.385477    4292 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0806 00:35:47.385481    4292 command_runner.go:130] > ID=buildroot
	I0806 00:35:47.385485    4292 command_runner.go:130] > VERSION_ID=2023.02.9
	I0806 00:35:47.385489    4292 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0806 00:35:47.385581    4292 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 00:35:47.385594    4292 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/addons for local assets ...
	I0806 00:35:47.385696    4292 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/files for local assets ...
	I0806 00:35:47.385887    4292 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> 14372.pem in /etc/ssl/certs
	I0806 00:35:47.385903    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> /etc/ssl/certs/14372.pem
	I0806 00:35:47.386118    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 00:35:47.394135    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem --> /etc/ssl/certs/14372.pem (1708 bytes)
	I0806 00:35:47.413151    4292 start.go:296] duration metric: took 71.212336ms for postStartSetup
	I0806 00:35:47.413177    4292 main.go:141] libmachine: (multinode-100000) Calling .GetConfigRaw
	I0806 00:35:47.413783    4292 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:35:47.413932    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:35:47.414265    4292 start.go:128] duration metric: took 14.346903661s to createHost
	I0806 00:35:47.414279    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:47.414369    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:47.414451    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.414534    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.414620    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:47.414723    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:47.414850    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:47.414859    4292 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 00:35:47.480376    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722929747.524109427
	
	I0806 00:35:47.480388    4292 fix.go:216] guest clock: 1722929747.524109427
	I0806 00:35:47.480393    4292 fix.go:229] Guest: 2024-08-06 00:35:47.524109427 -0700 PDT Remote: 2024-08-06 00:35:47.414273 -0700 PDT m=+14.774098631 (delta=109.836427ms)
	I0806 00:35:47.480413    4292 fix.go:200] guest clock delta is within tolerance: 109.836427ms
	I0806 00:35:47.480416    4292 start.go:83] releasing machines lock for "multinode-100000", held for 14.413201307s
	I0806 00:35:47.480435    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.480582    4292 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:35:47.480686    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.481025    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.481144    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.481220    4292 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 00:35:47.481250    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:47.481279    4292 ssh_runner.go:195] Run: cat /version.json
	I0806 00:35:47.481291    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:47.481352    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:47.481353    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:47.481449    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.481463    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.481541    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:47.481556    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:47.481638    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:35:47.481653    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:35:47.582613    4292 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0806 00:35:47.583428    4292 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0806 00:35:47.583596    4292 ssh_runner.go:195] Run: systemctl --version
	I0806 00:35:47.588843    4292 command_runner.go:130] > systemd 252 (252)
	I0806 00:35:47.588866    4292 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0806 00:35:47.588920    4292 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0806 00:35:47.593612    4292 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0806 00:35:47.593639    4292 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 00:35:47.593687    4292 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 00:35:47.607350    4292 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0806 00:35:47.607480    4292 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 00:35:47.607494    4292 start.go:495] detecting cgroup driver to use...
	I0806 00:35:47.607588    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:35:47.622260    4292 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0806 00:35:47.622586    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0806 00:35:47.631764    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0806 00:35:47.640650    4292 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0806 00:35:47.640704    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0806 00:35:47.649724    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:35:47.658558    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0806 00:35:47.667341    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:35:47.677183    4292 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 00:35:47.686281    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0806 00:35:47.695266    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0806 00:35:47.704014    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0806 00:35:47.712970    4292 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 00:35:47.720743    4292 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0806 00:35:47.720841    4292 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 00:35:47.728846    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:35:47.828742    4292 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0806 00:35:47.848191    4292 start.go:495] detecting cgroup driver to use...
	I0806 00:35:47.848271    4292 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0806 00:35:47.862066    4292 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0806 00:35:47.862604    4292 command_runner.go:130] > [Unit]
	I0806 00:35:47.862619    4292 command_runner.go:130] > Description=Docker Application Container Engine
	I0806 00:35:47.862625    4292 command_runner.go:130] > Documentation=https://docs.docker.com
	I0806 00:35:47.862630    4292 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0806 00:35:47.862634    4292 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0806 00:35:47.862642    4292 command_runner.go:130] > StartLimitBurst=3
	I0806 00:35:47.862646    4292 command_runner.go:130] > StartLimitIntervalSec=60
	I0806 00:35:47.862663    4292 command_runner.go:130] > [Service]
	I0806 00:35:47.862670    4292 command_runner.go:130] > Type=notify
	I0806 00:35:47.862674    4292 command_runner.go:130] > Restart=on-failure
	I0806 00:35:47.862696    4292 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0806 00:35:47.862704    4292 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0806 00:35:47.862710    4292 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0806 00:35:47.862716    4292 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0806 00:35:47.862724    4292 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0806 00:35:47.862731    4292 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0806 00:35:47.862742    4292 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0806 00:35:47.862756    4292 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0806 00:35:47.862768    4292 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0806 00:35:47.862789    4292 command_runner.go:130] > ExecStart=
	I0806 00:35:47.862803    4292 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0806 00:35:47.862808    4292 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0806 00:35:47.862814    4292 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0806 00:35:47.862820    4292 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0806 00:35:47.862826    4292 command_runner.go:130] > LimitNOFILE=infinity
	I0806 00:35:47.862831    4292 command_runner.go:130] > LimitNPROC=infinity
	I0806 00:35:47.862835    4292 command_runner.go:130] > LimitCORE=infinity
	I0806 00:35:47.862840    4292 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0806 00:35:47.862847    4292 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0806 00:35:47.862852    4292 command_runner.go:130] > TasksMax=infinity
	I0806 00:35:47.862857    4292 command_runner.go:130] > TimeoutStartSec=0
	I0806 00:35:47.862864    4292 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0806 00:35:47.862869    4292 command_runner.go:130] > Delegate=yes
	I0806 00:35:47.862875    4292 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0806 00:35:47.862880    4292 command_runner.go:130] > KillMode=process
	I0806 00:35:47.862885    4292 command_runner.go:130] > [Install]
	I0806 00:35:47.862897    4292 command_runner.go:130] > WantedBy=multi-user.target
	I0806 00:35:47.862957    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:35:47.874503    4292 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 00:35:47.888401    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:35:47.899678    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:35:47.910858    4292 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0806 00:35:47.935194    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:35:47.946319    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:35:47.961240    4292 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0806 00:35:47.961509    4292 ssh_runner.go:195] Run: which cri-dockerd
	I0806 00:35:47.964405    4292 command_runner.go:130] > /usr/bin/cri-dockerd
	I0806 00:35:47.964539    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0806 00:35:47.972571    4292 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0806 00:35:47.986114    4292 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0806 00:35:48.089808    4292 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0806 00:35:48.189821    4292 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0806 00:35:48.189902    4292 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0806 00:35:48.205371    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:35:48.305180    4292 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:35:50.610688    4292 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.305442855s)
	I0806 00:35:50.610744    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0806 00:35:50.621917    4292 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0806 00:37:45.085447    4292 ssh_runner.go:235] Completed: sudo systemctl stop cri-docker.socket: (1m54.461245771s)
	I0806 00:37:45.085519    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0806 00:37:45.097196    4292 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0806 00:37:45.197114    4292 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0806 00:37:45.292406    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:37:45.391129    4292 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0806 00:37:45.405046    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0806 00:37:45.416102    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:37:45.533604    4292 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0806 00:37:45.589610    4292 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0806 00:37:45.589706    4292 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0806 00:37:45.594037    4292 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0806 00:37:45.594049    4292 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0806 00:37:45.594054    4292 command_runner.go:130] > Device: 0,22	Inode: 805         Links: 1
	I0806 00:37:45.594060    4292 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0806 00:37:45.594064    4292 command_runner.go:130] > Access: 2024-08-06 07:37:45.625216614 +0000
	I0806 00:37:45.594069    4292 command_runner.go:130] > Modify: 2024-08-06 07:37:45.625216614 +0000
	I0806 00:37:45.594073    4292 command_runner.go:130] > Change: 2024-08-06 07:37:45.627215775 +0000
	I0806 00:37:45.594076    4292 command_runner.go:130] >  Birth: -
	I0806 00:37:45.594117    4292 start.go:563] Will wait 60s for crictl version
	I0806 00:37:45.594161    4292 ssh_runner.go:195] Run: which crictl
	I0806 00:37:45.596956    4292 command_runner.go:130] > /usr/bin/crictl
	I0806 00:37:45.597171    4292 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 00:37:45.621060    4292 command_runner.go:130] > Version:  0.1.0
	I0806 00:37:45.621116    4292 command_runner.go:130] > RuntimeName:  docker
	I0806 00:37:45.621195    4292 command_runner.go:130] > RuntimeVersion:  27.1.1
	I0806 00:37:45.621265    4292 command_runner.go:130] > RuntimeApiVersion:  v1
	I0806 00:37:45.622461    4292 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0806 00:37:45.622524    4292 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0806 00:37:45.639748    4292 command_runner.go:130] > 27.1.1
	I0806 00:37:45.640898    4292 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0806 00:37:45.659970    4292 command_runner.go:130] > 27.1.1
	I0806 00:37:45.682623    4292 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0806 00:37:45.682654    4292 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:37:45.682940    4292 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0806 00:37:45.686120    4292 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 00:37:45.696475    4292 kubeadm.go:883] updating cluster {Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 00:37:45.696537    4292 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:37:45.696591    4292 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0806 00:37:45.709358    4292 docker.go:685] Got preloaded images: 
	I0806 00:37:45.709371    4292 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0806 00:37:45.709415    4292 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0806 00:37:45.717614    4292 command_runner.go:139] > {"Repositories":{}}
	I0806 00:37:45.717741    4292 ssh_runner.go:195] Run: which lz4
	I0806 00:37:45.720684    4292 command_runner.go:130] > /usr/bin/lz4
	I0806 00:37:45.720774    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0806 00:37:45.720887    4292 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0806 00:37:45.723901    4292 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 00:37:45.723990    4292 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 00:37:45.724007    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
	I0806 00:37:46.617374    4292 docker.go:649] duration metric: took 896.51057ms to copy over tarball
	I0806 00:37:46.617438    4292 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0806 00:37:48.962709    4292 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.345209203s)
	I0806 00:37:48.962723    4292 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 00:37:48.989708    4292 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0806 00:37:48.998314    4292 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.3":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.3":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.3":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d2
89d99da794784d1"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.3":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0806 00:37:48.998434    4292 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0806 00:37:49.011940    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:37:49.104996    4292 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:37:51.441428    4292 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.336367372s)
	I0806 00:37:51.441504    4292 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0806 00:37:51.454654    4292 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0806 00:37:51.454669    4292 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0806 00:37:51.454674    4292 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0806 00:37:51.454682    4292 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0806 00:37:51.454686    4292 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0806 00:37:51.454690    4292 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0806 00:37:51.454695    4292 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0806 00:37:51.454700    4292 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:37:51.455392    4292 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0806 00:37:51.455409    4292 cache_images.go:84] Images are preloaded, skipping loading
	I0806 00:37:51.455420    4292 kubeadm.go:934] updating node { 192.169.0.13 8443 v1.30.3 docker true true} ...
	I0806 00:37:51.455506    4292 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-100000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 00:37:51.455578    4292 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0806 00:37:51.493148    4292 command_runner.go:130] > cgroupfs
	I0806 00:37:51.493761    4292 cni.go:84] Creating CNI manager for ""
	I0806 00:37:51.493770    4292 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0806 00:37:51.493779    4292 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 00:37:51.493799    4292 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.13 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-100000 NodeName:multinode-100000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 00:37:51.493886    4292 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-100000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 00:37:51.493946    4292 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 00:37:51.501517    4292 command_runner.go:130] > kubeadm
	I0806 00:37:51.501524    4292 command_runner.go:130] > kubectl
	I0806 00:37:51.501527    4292 command_runner.go:130] > kubelet
	I0806 00:37:51.501670    4292 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 00:37:51.501712    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 00:37:51.509045    4292 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0806 00:37:51.522572    4292 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 00:37:51.535791    4292 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0806 00:37:51.549550    4292 ssh_runner.go:195] Run: grep 192.169.0.13	control-plane.minikube.internal$ /etc/hosts
	I0806 00:37:51.552639    4292 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 00:37:51.562209    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:37:51.657200    4292 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:37:51.669303    4292 certs.go:68] Setting up /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000 for IP: 192.169.0.13
	I0806 00:37:51.669315    4292 certs.go:194] generating shared ca certs ...
	I0806 00:37:51.669325    4292 certs.go:226] acquiring lock for ca certs: {Name:mk58145664d6c2b1eff70ba1600cc91cf1a11355 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.669518    4292 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.key
	I0806 00:37:51.669593    4292 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.key
	I0806 00:37:51.669606    4292 certs.go:256] generating profile certs ...
	I0806 00:37:51.669656    4292 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key
	I0806 00:37:51.669668    4292 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt with IP's: []
	I0806 00:37:51.792624    4292 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt ...
	I0806 00:37:51.792639    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt: {Name:mk8667fc194de8cf8fded4f6b0b716fe105f94fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.792981    4292 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key ...
	I0806 00:37:51.792989    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key: {Name:mk5693609b0c83eb3bce2eae7a5d8211445280d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.793215    4292 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key.de816dec
	I0806 00:37:51.793229    4292 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt.de816dec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.13]
	I0806 00:37:51.926808    4292 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt.de816dec ...
	I0806 00:37:51.926818    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt.de816dec: {Name:mk977e2f365dba4e3b0587a998566fa4d7926493 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.927069    4292 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key.de816dec ...
	I0806 00:37:51.927078    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key.de816dec: {Name:mkdef83341ea7ae5698bd9e2d60c39f8cd2a4e46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.927285    4292 certs.go:381] copying /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt.de816dec -> /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt
	I0806 00:37:51.927484    4292 certs.go:385] copying /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key.de816dec -> /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key
	I0806 00:37:51.927653    4292 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key
	I0806 00:37:51.927669    4292 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt with IP's: []
	I0806 00:37:52.088433    4292 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt ...
	I0806 00:37:52.088444    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt: {Name:mkc673b9a3bc6652ddb14f333f9d124c615a6826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:52.088718    4292 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key ...
	I0806 00:37:52.088726    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key: {Name:mkf7f90929aa11855cc285630f5ad4bb575ccae4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:52.088945    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0806 00:37:52.088974    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0806 00:37:52.088995    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0806 00:37:52.089015    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0806 00:37:52.089034    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0806 00:37:52.089054    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0806 00:37:52.089072    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0806 00:37:52.089091    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0806 00:37:52.089188    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437.pem (1338 bytes)
	W0806 00:37:52.089246    4292 certs.go:480] ignoring /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437_empty.pem, impossibly tiny 0 bytes
	I0806 00:37:52.089257    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 00:37:52.089300    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem (1078 bytes)
	I0806 00:37:52.089366    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem (1123 bytes)
	I0806 00:37:52.089422    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem (1679 bytes)
	I0806 00:37:52.089542    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem (1708 bytes)
	I0806 00:37:52.089590    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.089613    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.089632    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437.pem -> /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.090046    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 00:37:52.111710    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 00:37:52.131907    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 00:37:52.151479    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0806 00:37:52.171693    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0806 00:37:52.191484    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 00:37:52.211176    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 00:37:52.230802    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 00:37:52.250506    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem --> /usr/share/ca-certificates/14372.pem (1708 bytes)
	I0806 00:37:52.270606    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 00:37:52.290275    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437.pem --> /usr/share/ca-certificates/1437.pem (1338 bytes)
	I0806 00:37:52.309237    4292 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 00:37:52.323119    4292 ssh_runner.go:195] Run: openssl version
	I0806 00:37:52.327113    4292 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0806 00:37:52.327315    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14372.pem && ln -fs /usr/share/ca-certificates/14372.pem /etc/ssl/certs/14372.pem"
	I0806 00:37:52.335532    4292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.338816    4292 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  6 07:14 /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.338844    4292 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:14 /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.338901    4292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.343016    4292 command_runner.go:130] > 3ec20f2e
	I0806 00:37:52.343165    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14372.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 00:37:52.351433    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 00:37:52.362210    4292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.368669    4292 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  6 07:05 /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.368937    4292 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:05 /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.368987    4292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.373469    4292 command_runner.go:130] > b5213941
	I0806 00:37:52.373704    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 00:37:52.384235    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1437.pem && ln -fs /usr/share/ca-certificates/1437.pem /etc/ssl/certs/1437.pem"
	I0806 00:37:52.395305    4292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.400212    4292 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  6 07:14 /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.400421    4292 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:14 /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.400474    4292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.406136    4292 command_runner.go:130] > 51391683
	I0806 00:37:52.406235    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1437.pem /etc/ssl/certs/51391683.0"
	I0806 00:37:52.415464    4292 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 00:37:52.418597    4292 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0806 00:37:52.418637    4292 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0806 00:37:52.418680    4292 kubeadm.go:392] StartCluster: {Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:37:52.418767    4292 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0806 00:37:52.431331    4292 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 00:37:52.439651    4292 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0806 00:37:52.439663    4292 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0806 00:37:52.439684    4292 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0806 00:37:52.439814    4292 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 00:37:52.447838    4292 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 00:37:52.455844    4292 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0806 00:37:52.455854    4292 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0806 00:37:52.455860    4292 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0806 00:37:52.455865    4292 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 00:37:52.455878    4292 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 00:37:52.455884    4292 kubeadm.go:157] found existing configuration files:
	
	I0806 00:37:52.455917    4292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 00:37:52.463564    4292 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 00:37:52.463581    4292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 00:37:52.463638    4292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 00:37:52.471500    4292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 00:37:52.479060    4292 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 00:37:52.479083    4292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 00:37:52.479115    4292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 00:37:52.487038    4292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 00:37:52.494658    4292 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 00:37:52.494678    4292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 00:37:52.494715    4292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 00:37:52.502699    4292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 00:37:52.510396    4292 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 00:37:52.510413    4292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 00:37:52.510448    4292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 00:37:52.518459    4292 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 00:37:52.582551    4292 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0806 00:37:52.582567    4292 command_runner.go:130] > [init] Using Kubernetes version: v1.30.3
	I0806 00:37:52.582622    4292 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 00:37:52.582630    4292 command_runner.go:130] > [preflight] Running pre-flight checks
	I0806 00:37:52.670948    4292 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 00:37:52.670966    4292 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 00:37:52.671056    4292 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 00:37:52.671068    4292 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 00:37:52.671166    4292 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 00:37:52.671175    4292 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 00:37:52.840152    4292 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 00:37:52.840173    4292 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 00:37:52.860448    4292 out.go:204]   - Generating certificates and keys ...
	I0806 00:37:52.860515    4292 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0806 00:37:52.860522    4292 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 00:37:52.860574    4292 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0806 00:37:52.860578    4292 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 00:37:53.262704    4292 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0806 00:37:53.262716    4292 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0806 00:37:53.357977    4292 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0806 00:37:53.357990    4292 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0806 00:37:53.460380    4292 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0806 00:37:53.460383    4292 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0806 00:37:53.557795    4292 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0806 00:37:53.557804    4292 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0806 00:37:53.672961    4292 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0806 00:37:53.672972    4292 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0806 00:37:53.673143    4292 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-100000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0806 00:37:53.673153    4292 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-100000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0806 00:37:53.823821    4292 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0806 00:37:53.823828    4292 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0806 00:37:53.823935    4292 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-100000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0806 00:37:53.823943    4292 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-100000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0806 00:37:53.907043    4292 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0806 00:37:53.907053    4292 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0806 00:37:54.170203    4292 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0806 00:37:54.170215    4292 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0806 00:37:54.232963    4292 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0806 00:37:54.232976    4292 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0806 00:37:54.233108    4292 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 00:37:54.233115    4292 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 00:37:54.560300    4292 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 00:37:54.560310    4292 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 00:37:54.689503    4292 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0806 00:37:54.689520    4292 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0806 00:37:54.772704    4292 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 00:37:54.772714    4292 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 00:37:54.901757    4292 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 00:37:54.901770    4292 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 00:37:55.057967    4292 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 00:37:55.057987    4292 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 00:37:55.058372    4292 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 00:37:55.058381    4292 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 00:37:55.060093    4292 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 00:37:55.060100    4292 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 00:37:55.081494    4292 out.go:204]   - Booting up control plane ...
	I0806 00:37:55.081559    4292 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 00:37:55.081566    4292 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 00:37:55.081622    4292 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 00:37:55.081627    4292 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 00:37:55.081688    4292 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 00:37:55.081706    4292 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 00:37:55.081835    4292 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 00:37:55.081836    4292 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 00:37:55.081921    4292 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 00:37:55.081928    4292 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 00:37:55.081962    4292 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 00:37:55.081972    4292 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0806 00:37:55.190382    4292 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0806 00:37:55.190382    4292 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0806 00:37:55.190467    4292 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0806 00:37:55.190474    4292 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0806 00:37:55.692270    4292 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.007026ms
	I0806 00:37:55.692288    4292 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 502.007026ms
	I0806 00:37:55.692374    4292 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0806 00:37:55.692383    4292 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0806 00:37:59.693684    4292 kubeadm.go:310] [api-check] The API server is healthy after 4.003026548s
	I0806 00:37:59.693693    4292 command_runner.go:130] > [api-check] The API server is healthy after 4.003026548s
	I0806 00:37:59.705633    4292 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0806 00:37:59.705646    4292 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0806 00:37:59.720099    4292 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0806 00:37:59.720109    4292 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0806 00:37:59.738249    4292 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0806 00:37:59.738275    4292 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0806 00:37:59.738423    4292 kubeadm.go:310] [mark-control-plane] Marking the node multinode-100000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0806 00:37:59.738434    4292 command_runner.go:130] > [mark-control-plane] Marking the node multinode-100000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0806 00:37:59.745383    4292 kubeadm.go:310] [bootstrap-token] Using token: vbomjh.qsf72loo4zgv06fc
	I0806 00:37:59.745397    4292 command_runner.go:130] > [bootstrap-token] Using token: vbomjh.qsf72loo4zgv06fc
	I0806 00:37:59.783358    4292 out.go:204]   - Configuring RBAC rules ...
	I0806 00:37:59.783539    4292 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0806 00:37:59.783560    4292 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0806 00:37:59.785907    4292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0806 00:37:59.785948    4292 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0806 00:37:59.826999    4292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0806 00:37:59.827006    4292 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0806 00:37:59.829623    4292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0806 00:37:59.829627    4292 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0806 00:37:59.832217    4292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0806 00:37:59.832231    4292 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0806 00:37:59.834614    4292 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0806 00:37:59.834628    4292 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0806 00:38:00.099434    4292 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0806 00:38:00.099444    4292 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0806 00:38:00.510267    4292 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0806 00:38:00.510286    4292 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0806 00:38:01.098516    4292 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0806 00:38:01.098535    4292 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0806 00:38:01.099426    4292 kubeadm.go:310] 
	I0806 00:38:01.099476    4292 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0806 00:38:01.099482    4292 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0806 00:38:01.099485    4292 kubeadm.go:310] 
	I0806 00:38:01.099571    4292 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0806 00:38:01.099579    4292 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0806 00:38:01.099583    4292 kubeadm.go:310] 
	I0806 00:38:01.099621    4292 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0806 00:38:01.099627    4292 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0806 00:38:01.099685    4292 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0806 00:38:01.099692    4292 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0806 00:38:01.099737    4292 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0806 00:38:01.099742    4292 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0806 00:38:01.099758    4292 kubeadm.go:310] 
	I0806 00:38:01.099805    4292 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0806 00:38:01.099811    4292 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0806 00:38:01.099816    4292 kubeadm.go:310] 
	I0806 00:38:01.099868    4292 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0806 00:38:01.099874    4292 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0806 00:38:01.099878    4292 kubeadm.go:310] 
	I0806 00:38:01.099924    4292 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0806 00:38:01.099932    4292 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0806 00:38:01.099998    4292 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0806 00:38:01.100012    4292 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0806 00:38:01.100083    4292 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0806 00:38:01.100088    4292 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0806 00:38:01.100092    4292 kubeadm.go:310] 
	I0806 00:38:01.100168    4292 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0806 00:38:01.100177    4292 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0806 00:38:01.100245    4292 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0806 00:38:01.100249    4292 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0806 00:38:01.100256    4292 kubeadm.go:310] 
	I0806 00:38:01.100330    4292 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token vbomjh.qsf72loo4zgv06fc \
	I0806 00:38:01.100335    4292 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vbomjh.qsf72loo4zgv06fc \
	I0806 00:38:01.100422    4292 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e \
	I0806 00:38:01.100428    4292 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e \
	I0806 00:38:01.100450    4292 command_runner.go:130] > 	--control-plane 
	I0806 00:38:01.100454    4292 kubeadm.go:310] 	--control-plane 
	I0806 00:38:01.100465    4292 kubeadm.go:310] 
	I0806 00:38:01.100533    4292 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0806 00:38:01.100538    4292 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0806 00:38:01.100545    4292 kubeadm.go:310] 
	I0806 00:38:01.100605    4292 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token vbomjh.qsf72loo4zgv06fc \
	I0806 00:38:01.100610    4292 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vbomjh.qsf72loo4zgv06fc \
	I0806 00:38:01.100694    4292 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e 
	I0806 00:38:01.100703    4292 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e 
	I0806 00:38:01.101330    4292 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 00:38:01.101334    4292 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 00:38:01.101354    4292 cni.go:84] Creating CNI manager for ""
	I0806 00:38:01.101361    4292 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0806 00:38:01.123627    4292 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0806 00:38:01.196528    4292 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0806 00:38:01.201237    4292 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0806 00:38:01.201250    4292 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0806 00:38:01.201255    4292 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0806 00:38:01.201260    4292 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0806 00:38:01.201265    4292 command_runner.go:130] > Access: 2024-08-06 07:35:44.089192446 +0000
	I0806 00:38:01.201275    4292 command_runner.go:130] > Modify: 2024-07-29 16:10:03.000000000 +0000
	I0806 00:38:01.201282    4292 command_runner.go:130] > Change: 2024-08-06 07:35:42.019366338 +0000
	I0806 00:38:01.201285    4292 command_runner.go:130] >  Birth: -
	I0806 00:38:01.201457    4292 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0806 00:38:01.201465    4292 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0806 00:38:01.217771    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0806 00:38:01.451925    4292 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0806 00:38:01.451939    4292 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0806 00:38:01.451946    4292 command_runner.go:130] > serviceaccount/kindnet created
	I0806 00:38:01.451949    4292 command_runner.go:130] > daemonset.apps/kindnet created
	I0806 00:38:01.451970    4292 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 00:38:01.452056    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:01.452057    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-100000 minikube.k8s.io/updated_at=2024_08_06T00_38_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8 minikube.k8s.io/name=multinode-100000 minikube.k8s.io/primary=true
	I0806 00:38:01.610233    4292 command_runner.go:130] > node/multinode-100000 labeled
	I0806 00:38:01.611382    4292 command_runner.go:130] > -16
	I0806 00:38:01.611408    4292 ops.go:34] apiserver oom_adj: -16
	I0806 00:38:01.611436    4292 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0806 00:38:01.611535    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:01.673352    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:02.112700    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:02.170574    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:02.612824    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:02.681015    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:03.112860    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:03.173114    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:03.612060    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:03.674241    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:04.112239    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:04.174075    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:04.613016    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:04.675523    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:05.112239    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:05.171613    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:05.611863    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:05.672963    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:06.112009    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:06.167728    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:06.613273    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:06.670554    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:07.113057    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:07.167700    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:07.613035    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:07.675035    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:08.113568    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:08.177386    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:08.611850    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:08.669063    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:09.113472    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:09.173560    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:09.613780    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:09.676070    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:10.112109    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:10.172674    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:10.613930    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:10.669788    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:11.112032    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:11.178288    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:11.612564    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:11.681621    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:12.112219    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:12.169314    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:12.612581    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:12.670247    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:13.113181    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:13.172574    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:13.613362    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:13.672811    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:14.112553    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:14.177904    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:14.612414    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:14.708737    4292 command_runner.go:130] > NAME      SECRETS   AGE
	I0806 00:38:14.708751    4292 command_runner.go:130] > default   0         0s
	I0806 00:38:14.710041    4292 kubeadm.go:1113] duration metric: took 13.257790627s to wait for elevateKubeSystemPrivileges
	I0806 00:38:14.710058    4292 kubeadm.go:394] duration metric: took 22.29094538s to StartCluster
	I0806 00:38:14.710072    4292 settings.go:142] acquiring lock: {Name:mk7aec99dc6d69d6a2c18b35ff8bde3cddf78620 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:38:14.710182    4292 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:38:14.710733    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/kubeconfig: {Name:mka547673b59bc4eb06e1f2c8130de31708dba29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:38:14.710987    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0806 00:38:14.710992    4292 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 00:38:14.711032    4292 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 00:38:14.711084    4292 addons.go:69] Setting storage-provisioner=true in profile "multinode-100000"
	I0806 00:38:14.711092    4292 addons.go:69] Setting default-storageclass=true in profile "multinode-100000"
	I0806 00:38:14.711119    4292 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-100000"
	I0806 00:38:14.711121    4292 addons.go:234] Setting addon storage-provisioner=true in "multinode-100000"
	I0806 00:38:14.711168    4292 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:38:14.711168    4292 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:38:14.711516    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.711537    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.711593    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.711618    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.720676    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52433
	I0806 00:38:14.721047    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52435
	I0806 00:38:14.721245    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.721337    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.721602    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.721612    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.721697    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.721714    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.721841    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.721914    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.721953    4292 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:38:14.722073    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:14.722146    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:38:14.722387    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.722420    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.724119    4292 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:38:14.724644    4292 kapi.go:59] client config for multinode-100000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key", CAFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x126711a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 00:38:14.725326    4292 cert_rotation.go:137] Starting client certificate rotation controller
	I0806 00:38:14.725514    4292 addons.go:234] Setting addon default-storageclass=true in "multinode-100000"
	I0806 00:38:14.725534    4292 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:38:14.725758    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.725781    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.731505    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52437
	I0806 00:38:14.731883    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.732214    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.732225    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.732427    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.732542    4292 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:38:14.732646    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:14.732716    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:38:14.733688    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:38:14.734469    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52439
	I0806 00:38:14.749366    4292 out.go:177] * Verifying Kubernetes components...
	I0806 00:38:14.750086    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.771676    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.771692    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.771908    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.772346    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.772371    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.781133    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52441
	I0806 00:38:14.781487    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.781841    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.781857    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.782071    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.782186    4292 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:38:14.782264    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:14.782343    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:38:14.783274    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:38:14.783391    4292 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 00:38:14.783400    4292 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 00:38:14.783408    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:38:14.783487    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:38:14.783566    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:38:14.783647    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:38:14.783724    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:38:14.807507    4292 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:38:14.814402    4292 command_runner.go:130] > apiVersion: v1
	I0806 00:38:14.814414    4292 command_runner.go:130] > data:
	I0806 00:38:14.814417    4292 command_runner.go:130] >   Corefile: |
	I0806 00:38:14.814421    4292 command_runner.go:130] >     .:53 {
	I0806 00:38:14.814427    4292 command_runner.go:130] >         errors
	I0806 00:38:14.814434    4292 command_runner.go:130] >         health {
	I0806 00:38:14.814462    4292 command_runner.go:130] >            lameduck 5s
	I0806 00:38:14.814467    4292 command_runner.go:130] >         }
	I0806 00:38:14.814470    4292 command_runner.go:130] >         ready
	I0806 00:38:14.814475    4292 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0806 00:38:14.814479    4292 command_runner.go:130] >            pods insecure
	I0806 00:38:14.814483    4292 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0806 00:38:14.814491    4292 command_runner.go:130] >            ttl 30
	I0806 00:38:14.814494    4292 command_runner.go:130] >         }
	I0806 00:38:14.814498    4292 command_runner.go:130] >         prometheus :9153
	I0806 00:38:14.814502    4292 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0806 00:38:14.814511    4292 command_runner.go:130] >            max_concurrent 1000
	I0806 00:38:14.814515    4292 command_runner.go:130] >         }
	I0806 00:38:14.814519    4292 command_runner.go:130] >         cache 30
	I0806 00:38:14.814522    4292 command_runner.go:130] >         loop
	I0806 00:38:14.814527    4292 command_runner.go:130] >         reload
	I0806 00:38:14.814530    4292 command_runner.go:130] >         loadbalance
	I0806 00:38:14.814541    4292 command_runner.go:130] >     }
	I0806 00:38:14.814545    4292 command_runner.go:130] > kind: ConfigMap
	I0806 00:38:14.814548    4292 command_runner.go:130] > metadata:
	I0806 00:38:14.814555    4292 command_runner.go:130] >   creationTimestamp: "2024-08-06T07:38:00Z"
	I0806 00:38:14.814559    4292 command_runner.go:130] >   name: coredns
	I0806 00:38:14.814563    4292 command_runner.go:130] >   namespace: kube-system
	I0806 00:38:14.814566    4292 command_runner.go:130] >   resourceVersion: "257"
	I0806 00:38:14.814570    4292 command_runner.go:130] >   uid: d8fd854e-ee58-4cd2-8723-2418b89b5dc3
	I0806 00:38:14.814679    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0806 00:38:14.866135    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:38:14.866436    4292 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 00:38:14.866454    4292 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 00:38:14.866500    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:38:14.866990    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:38:14.867164    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:38:14.867290    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:38:14.867406    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:38:14.872742    4292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 00:38:15.241341    4292 command_runner.go:130] > configmap/coredns replaced
	I0806 00:38:15.242685    4292 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I0806 00:38:15.242758    4292 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:38:15.242961    4292 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:38:15.243148    4292 kapi.go:59] client config for multinode-100000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key", CAFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x126711a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 00:38:15.243392    4292 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0806 00:38:15.243400    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.243407    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.243411    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.256678    4292 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0806 00:38:15.256695    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.256702    4292 round_trippers.go:580]     Audit-Id: c7c6b1c0-d638-405d-9826-1613f9442124
	I0806 00:38:15.256715    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.256719    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.256721    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.256724    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.256731    4292 round_trippers.go:580]     Content-Length: 291
	I0806 00:38:15.256734    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.256762    4292 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a7f2b260-b404-47f8-94a7-9444b4d2e65d","resourceVersion":"385","creationTimestamp":"2024-08-06T07:38:00Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0806 00:38:15.257109    4292 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a7f2b260-b404-47f8-94a7-9444b4d2e65d","resourceVersion":"385","creationTimestamp":"2024-08-06T07:38:00Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0806 00:38:15.257149    4292 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0806 00:38:15.257157    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.257163    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.257166    4292 round_trippers.go:473]     Content-Type: application/json
	I0806 00:38:15.257169    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.263818    4292 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0806 00:38:15.263831    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.263837    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.263840    4292 round_trippers.go:580]     Content-Length: 291
	I0806 00:38:15.263843    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.263846    4292 round_trippers.go:580]     Audit-Id: fc5baf31-13f0-4c94-a234-c9583698bc4a
	I0806 00:38:15.263849    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.263853    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.263856    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.263869    4292 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a7f2b260-b404-47f8-94a7-9444b4d2e65d","resourceVersion":"387","creationTimestamp":"2024-08-06T07:38:00Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0806 00:38:15.288440    4292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 00:38:15.316986    4292 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0806 00:38:15.318339    4292 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:38:15.318523    4292 kapi.go:59] client config for multinode-100000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key", CAFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x126711a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 00:38:15.318703    4292 node_ready.go:35] waiting up to 6m0s for node "multinode-100000" to be "Ready" ...
	I0806 00:38:15.318752    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:15.318757    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.318762    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.318766    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.318890    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.318897    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.319084    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.319089    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.319096    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.319104    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.319113    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.319239    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.319249    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.319298    4292 round_trippers.go:463] GET https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses
	I0806 00:38:15.319296    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.319304    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.319313    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.319316    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.328466    4292 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0806 00:38:15.328478    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.328484    4292 round_trippers.go:580]     Content-Length: 1273
	I0806 00:38:15.328487    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.328490    4292 round_trippers.go:580]     Audit-Id: 55117bdb-b1b1-4b1d-a091-1eb3d07a9569
	I0806 00:38:15.328493    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.328496    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.328498    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.328501    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.328521    4292 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"396"},"items":[{"metadata":{"name":"standard","uid":"db2316a9-24ea-47df-bf39-03322fc9a8eb","resourceVersion":"396","creationTimestamp":"2024-08-06T07:38:15Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-06T07:38:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0806 00:38:15.328567    4292 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0806 00:38:15.328581    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.328586    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.328590    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.328593    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.328596    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.328599    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.328602    4292 round_trippers.go:580]     Audit-Id: 7ce70ed0-47c9-432d-8e5b-ac52e38e59a7
	I0806 00:38:15.328766    4292 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"db2316a9-24ea-47df-bf39-03322fc9a8eb","resourceVersion":"396","creationTimestamp":"2024-08-06T07:38:15Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-06T07:38:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0806 00:38:15.328802    4292 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0806 00:38:15.328808    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.328813    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.328818    4292 round_trippers.go:473]     Content-Type: application/json
	I0806 00:38:15.328820    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.330337    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:15.340216    4292 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0806 00:38:15.340231    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.340236    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.340243    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.340247    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.340251    4292 round_trippers.go:580]     Content-Length: 1220
	I0806 00:38:15.340254    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.340257    4292 round_trippers.go:580]     Audit-Id: 6dc8b90a-612f-4331-8c4e-911fcb5e8b97
	I0806 00:38:15.340261    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.340479    4292 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"db2316a9-24ea-47df-bf39-03322fc9a8eb","resourceVersion":"396","creationTimestamp":"2024-08-06T07:38:15Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-06T07:38:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0806 00:38:15.340564    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.340574    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.340728    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.340739    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.340746    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.606405    4292 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0806 00:38:15.610350    4292 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0806 00:38:15.615396    4292 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0806 00:38:15.619891    4292 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0806 00:38:15.627349    4292 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0806 00:38:15.635206    4292 command_runner.go:130] > pod/storage-provisioner created
	I0806 00:38:15.636675    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.636686    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.636830    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.636833    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.636843    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.636852    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.636857    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.636972    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.636980    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.636995    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.660876    4292 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0806 00:38:15.681735    4292 addons.go:510] duration metric: took 970.696783ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0806 00:38:15.744023    4292 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0806 00:38:15.744043    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.744049    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.744053    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.745471    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:15.745481    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.745486    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.745489    4292 round_trippers.go:580]     Audit-Id: 2e02dd3c-4368-4363-aef8-c54cb00d4e41
	I0806 00:38:15.745492    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.745495    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.745497    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.745500    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.745503    4292 round_trippers.go:580]     Content-Length: 291
	I0806 00:38:15.745519    4292 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a7f2b260-b404-47f8-94a7-9444b4d2e65d","resourceVersion":"399","creationTimestamp":"2024-08-06T07:38:00Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0806 00:38:15.745572    4292 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-100000" context rescaled to 1 replicas
	I0806 00:38:15.820125    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:15.820137    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.820143    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.820145    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.821478    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:15.821488    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.821495    4292 round_trippers.go:580]     Audit-Id: 2538e82b-a5b8-4cce-b67d-49b0a0cc6ccb
	I0806 00:38:15.821499    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.821504    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.821509    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.821513    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.821517    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.821699    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:16.318995    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:16.319022    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:16.319044    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:16.319050    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:16.321451    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:16.321466    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:16.321473    4292 round_trippers.go:580]     Audit-Id: 6d358883-b606-4bf9-b02f-6cb3dcc86ebb
	I0806 00:38:16.321478    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:16.321482    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:16.321507    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:16.321515    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:16.321519    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:16 GMT
	I0806 00:38:16.321636    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:16.819864    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:16.819880    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:16.819887    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:16.819892    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:16.822003    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:16.822013    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:16.822019    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:16.822032    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:16 GMT
	I0806 00:38:16.822039    4292 round_trippers.go:580]     Audit-Id: 688c294c-2ec1-4257-9ae2-31048566e1a5
	I0806 00:38:16.822042    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:16.822045    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:16.822048    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:16.822127    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:17.319875    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:17.319887    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:17.319893    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:17.319898    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:17.324202    4292 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 00:38:17.324219    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:17.324228    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:17.324233    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:17.324237    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:17.324247    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:17.324251    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:17 GMT
	I0806 00:38:17.324253    4292 round_trippers.go:580]     Audit-Id: 3cbcad32-1d66-4480-8eea-e0ba3baeb718
	I0806 00:38:17.324408    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:17.324668    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:17.818929    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:17.818941    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:17.818948    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:17.818952    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:17.820372    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:17.820383    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:17.820390    4292 round_trippers.go:580]     Audit-Id: 1b64d2ad-91d1-49c6-8964-cd044f7ab24f
	I0806 00:38:17.820395    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:17.820400    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:17.820404    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:17.820407    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:17.820409    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:17 GMT
	I0806 00:38:17.820562    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:18.318915    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:18.318928    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:18.318934    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:18.318937    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:18.320383    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:18.320392    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:18.320396    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:18.320400    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:18 GMT
	I0806 00:38:18.320403    4292 round_trippers.go:580]     Audit-Id: b404a6ee-15b9-4e15-b8f8-4cd9324a513d
	I0806 00:38:18.320405    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:18.320408    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:18.320411    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:18.320536    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:18.819634    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:18.819647    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:18.819654    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:18.819657    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:18.821628    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:18.821635    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:18.821639    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:18 GMT
	I0806 00:38:18.821643    4292 round_trippers.go:580]     Audit-Id: 12545d9e-2520-4675-8957-dd291bc1d252
	I0806 00:38:18.821646    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:18.821649    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:18.821651    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:18.821654    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:18.821749    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:19.319242    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:19.319258    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:19.319264    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:19.319267    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:19.320611    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:19.320621    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:19.320627    4292 round_trippers.go:580]     Audit-Id: a9b124b2-ff49-4d7d-961a-c4a1b6b3e4ab
	I0806 00:38:19.320630    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:19.320632    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:19.320635    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:19.320639    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:19.320642    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:19 GMT
	I0806 00:38:19.320781    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:19.820342    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:19.820371    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:19.820428    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:19.820437    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:19.823221    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:19.823242    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:19.823252    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:19.823258    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:19.823266    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:19.823272    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:19 GMT
	I0806 00:38:19.823291    4292 round_trippers.go:580]     Audit-Id: 9330a785-b406-42d7-a74c-e80b34311e1a
	I0806 00:38:19.823302    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:19.823409    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:19.823671    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:20.319027    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:20.319043    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:20.319051    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:20.319056    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:20.320812    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:20.320821    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:20.320827    4292 round_trippers.go:580]     Audit-Id: 1d9840bb-ba8b-45f8-852f-8ef7f645c8bd
	I0806 00:38:20.320830    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:20.320832    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:20.320835    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:20.320838    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:20.320841    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:20 GMT
	I0806 00:38:20.321034    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:20.819543    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:20.819566    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:20.819578    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:20.819585    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:20.822277    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:20.822293    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:20.822300    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:20.822303    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:20.822307    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:20 GMT
	I0806 00:38:20.822310    4292 round_trippers.go:580]     Audit-Id: 6a96712c-fdd2-4036-95c0-27109366b2b5
	I0806 00:38:20.822313    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:20.822332    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:20.822436    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:21.319938    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:21.320061    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:21.320076    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:21.320084    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:21.322332    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:21.322343    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:21.322350    4292 round_trippers.go:580]     Audit-Id: b6796df6-8c9c-475a-b9c2-e73edb1c0720
	I0806 00:38:21.322355    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:21.322359    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:21.322362    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:21.322366    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:21.322370    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:21 GMT
	I0806 00:38:21.322503    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:21.819349    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:21.819372    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:21.819383    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:21.819388    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:21.821890    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:21.821905    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:21.821912    4292 round_trippers.go:580]     Audit-Id: 89b2a861-f5a0-43e4-9d3f-01f7230eecc8
	I0806 00:38:21.821916    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:21.821920    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:21.821923    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:21.821927    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:21.821931    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:21 GMT
	I0806 00:38:21.822004    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:22.320544    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:22.320565    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:22.320576    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:22.320581    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:22.322858    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:22.322872    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:22.322879    4292 round_trippers.go:580]     Audit-Id: 70ae59be-bf9a-4c7a-9fb8-93ea504768fb
	I0806 00:38:22.322885    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:22.322888    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:22.322891    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:22.322895    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:22.322897    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:22 GMT
	I0806 00:38:22.323158    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:22.323412    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:22.819095    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:22.819114    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:22.819126    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:22.819132    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:22.821284    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:22.821297    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:22.821307    4292 round_trippers.go:580]     Audit-Id: 1c5d80ab-21c3-4733-bd98-f4c681e0fe0e
	I0806 00:38:22.821313    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:22.821318    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:22.821321    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:22.821324    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:22.821334    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:22 GMT
	I0806 00:38:22.821552    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:23.319478    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:23.319500    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:23.319518    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:23.319524    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:23.322104    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:23.322124    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:23.322132    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:23.322137    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:23.322143    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:23.322146    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:23.322156    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:23 GMT
	I0806 00:38:23.322161    4292 round_trippers.go:580]     Audit-Id: 5276d3f7-64a0-4983-b60c-4943cbdfd74f
	I0806 00:38:23.322305    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:23.819102    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:23.819121    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:23.819130    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:23.819135    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:23.821174    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:23.821208    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:23.821216    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:23.821222    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:23.821227    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:23.821230    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:23.821241    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:23 GMT
	I0806 00:38:23.821254    4292 round_trippers.go:580]     Audit-Id: 9a86a309-2e1e-4b43-9975-baf4a0c93f44
	I0806 00:38:23.821483    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:24.320265    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:24.320287    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:24.320299    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:24.320305    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:24.323064    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:24.323097    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:24.323123    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:24.323140    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:24.323149    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:24.323178    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:24.323185    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:24 GMT
	I0806 00:38:24.323196    4292 round_trippers.go:580]     Audit-Id: b0ef4ff1-b4d6-4fd5-870c-46b9f544b517
	I0806 00:38:24.323426    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:24.323675    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:24.819060    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:24.819080    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:24.819097    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:24.819136    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:24.821377    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:24.821390    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:24.821397    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:24 GMT
	I0806 00:38:24.821402    4292 round_trippers.go:580]     Audit-Id: b050183e-0245-4d40-9972-e2dd2be24181
	I0806 00:38:24.821405    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:24.821409    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:24.821413    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:24.821418    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:24.821619    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:25.319086    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:25.319102    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:25.319110    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:25.319114    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:25.321127    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:25.321149    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:25.321154    4292 round_trippers.go:580]     Audit-Id: b27c2996-2cfb-4ec2-83b6-49df62cf6805
	I0806 00:38:25.321177    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:25.321180    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:25.321184    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:25.321186    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:25.321194    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:25 GMT
	I0806 00:38:25.321259    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:25.820656    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:25.820678    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:25.820689    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:25.820695    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:25.823182    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:25.823194    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:25.823205    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:25.823210    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:25.823213    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:25.823216    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:25.823219    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:25 GMT
	I0806 00:38:25.823222    4292 round_trippers.go:580]     Audit-Id: e11f3fd5-b1c3-44c0-931c-e7172ae35765
	I0806 00:38:25.823311    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:26.320693    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:26.320710    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:26.320717    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:26.320721    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:26.322330    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:26.322339    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:26.322344    4292 round_trippers.go:580]     Audit-Id: 0c372b78-f3b7-43f2-a7aa-6ec405f17ce3
	I0806 00:38:26.322347    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:26.322350    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:26.322353    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:26.322363    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:26.322366    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:26 GMT
	I0806 00:38:26.322578    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:26.820921    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:26.820948    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:26.820966    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:26.820972    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:26.823698    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:26.823713    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:26.823723    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:26.823730    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:26.823739    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:26 GMT
	I0806 00:38:26.823757    4292 round_trippers.go:580]     Audit-Id: e8e852a8-07b7-455b-8f5b-ff9801610b22
	I0806 00:38:26.823766    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:26.823770    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:26.824211    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:26.824465    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:27.321232    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:27.321253    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:27.321265    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:27.321270    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:27.324530    4292 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:38:27.324543    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:27.324550    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:27.324554    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:27 GMT
	I0806 00:38:27.324566    4292 round_trippers.go:580]     Audit-Id: 4a0b2d15-d15f-46de-8b4a-13a9d4121efd
	I0806 00:38:27.324572    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:27.324578    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:27.324583    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:27.324732    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:27.820148    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:27.820170    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:27.820181    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:27.820186    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:27.822835    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:27.822859    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:27.823023    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:27.823030    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:27.823033    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:27.823038    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:27.823046    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:27 GMT
	I0806 00:38:27.823049    4292 round_trippers.go:580]     Audit-Id: 77dd4240-18e0-49c7-8881-ae5df446f885
	I0806 00:38:27.823127    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:28.319391    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:28.319412    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:28.319423    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:28.319431    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:28.321889    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:28.321906    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:28.321916    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:28.321923    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:28.321927    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:28.321930    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:28 GMT
	I0806 00:38:28.321933    4292 round_trippers.go:580]     Audit-Id: d4ff4fc8-d53b-4307-82a0-9a61164b0b18
	I0806 00:38:28.321937    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:28.322088    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:28.819334    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:28.819362    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:28.819374    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:28.819385    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:28.821814    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:28.821826    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:28.821833    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:28.821838    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:28.821843    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:28.821847    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:28.821851    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:28 GMT
	I0806 00:38:28.821855    4292 round_trippers.go:580]     Audit-Id: 9a79b284-c2c3-4adb-9d74-73805465144b
	I0806 00:38:28.821988    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:29.320103    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:29.320120    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:29.320128    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:29.320134    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:29.321966    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:29.321980    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:29.321987    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:29.322000    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:29.322005    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:29.322008    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:29 GMT
	I0806 00:38:29.322020    4292 round_trippers.go:580]     Audit-Id: 749bcf9b-24c9-4fac-99d8-ad9e961b1897
	I0806 00:38:29.322024    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:29.322094    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:29.322341    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:29.819722    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:29.819743    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:29.819752    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:29.819760    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:29.822636    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:29.822668    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:29.822700    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:29.822711    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:29.822721    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:29.822735    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:29 GMT
	I0806 00:38:29.822748    4292 round_trippers.go:580]     Audit-Id: 5408f9b5-fba3-4495-a0b7-9791cf82019c
	I0806 00:38:29.822773    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:29.822903    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:30.320349    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:30.320370    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.320380    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.320385    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.322518    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:30.322531    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.322538    4292 round_trippers.go:580]     Audit-Id: 1df1df85-a25c-4470-876a-7b00620c8f9b
	I0806 00:38:30.322543    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.322546    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.322550    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.322553    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.322558    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.322794    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:30.820065    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:30.820087    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.820099    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.820111    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.822652    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:30.822673    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.822683    4292 round_trippers.go:580]     Audit-Id: 0926ae78-d98d-44a5-8489-5522ccd95503
	I0806 00:38:30.822689    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.822695    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.822700    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.822706    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.822713    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.823032    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:30.823315    4292 node_ready.go:49] node "multinode-100000" has status "Ready":"True"
	I0806 00:38:30.823329    4292 node_ready.go:38] duration metric: took 15.504306549s for node "multinode-100000" to be "Ready" ...
	I0806 00:38:30.823341    4292 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 00:38:30.823387    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:30.823395    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.823403    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.823407    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.825747    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:30.825756    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.825761    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.825764    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.825768    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.825770    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.825773    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.825775    4292 round_trippers.go:580]     Audit-Id: f1883856-a563-4d68-a4ed-7bface4b980a
	I0806 00:38:30.827206    4292 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"431","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56289 chars]
	I0806 00:38:30.829456    4292 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:30.829498    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:38:30.829503    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.829508    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.829512    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.830675    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:30.830684    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.830691    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.830696    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.830704    4292 round_trippers.go:580]     Audit-Id: f42eab96-6adf-4fcb-9345-e180ca00b73d
	I0806 00:38:30.830715    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.830718    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.830720    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.830856    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"431","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0806 00:38:30.831092    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:30.831099    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.831105    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.831107    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.832184    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:30.832191    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.832197    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.832203    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.832207    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.832212    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.832218    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.832226    4292 round_trippers.go:580]     Audit-Id: d34ccfc2-089c-4010-b991-cc425a2b2446
	I0806 00:38:30.832371    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.329830    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:38:31.329844    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.329850    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.329854    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.331738    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:31.331767    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.331789    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.331808    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.331813    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.331817    4292 round_trippers.go:580]     Audit-Id: 32294b1b-fd5c-43f7-9851-1c5e5d04c3d9
	I0806 00:38:31.331820    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.331823    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.331921    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"431","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0806 00:38:31.332207    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.332215    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.332221    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.332225    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.333311    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:31.333324    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.333331    4292 round_trippers.go:580]     Audit-Id: a8b9458e-7f48-4e61-9daf-b2c4a52b1285
	I0806 00:38:31.333336    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.333342    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.333347    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.333351    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.333369    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.333493    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.830019    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:38:31.830040    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.830057    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.830063    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.832040    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:31.832055    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.832062    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.832068    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.832072    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.832076    4292 round_trippers.go:580]     Audit-Id: eae85e40-d774-4e35-8513-1a20542ce5f5
	I0806 00:38:31.832079    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.832082    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.832316    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"446","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6576 chars]
	I0806 00:38:31.832691    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.832701    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.832710    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.832715    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.833679    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.833688    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.833694    4292 round_trippers.go:580]     Audit-Id: ecd49a1b-eb24-4191-89bb-5cb071fd543a
	I0806 00:38:31.833699    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.833702    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.833711    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.833714    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.833717    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.833906    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.834082    4292 pod_ready.go:92] pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:31.834093    4292 pod_ready.go:81] duration metric: took 1.004604302s for pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.834101    4292 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.834131    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-100000
	I0806 00:38:31.834136    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.834141    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.834145    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.835126    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.835134    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.835139    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.835144    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.835147    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.835152    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.835155    4292 round_trippers.go:580]     Audit-Id: 8f3355e7-ed89-4a5c-9ef4-3f319a0b7eef
	I0806 00:38:31.835157    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.835289    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-100000","namespace":"kube-system","uid":"227ab7d9-399e-4151-bee7-1520182e38fe","resourceVersion":"333","creationTimestamp":"2024-08-06T07:37:59Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"4d956ffcd8bdef6a75a3174d9c9d792c","kubernetes.io/config.mirror":"4d956ffcd8bdef6a75a3174d9c9d792c","kubernetes.io/config.seen":"2024-08-06T07:37:55.730523562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:37:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0806 00:38:31.835498    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.835505    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.835510    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.835514    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.836524    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:31.836533    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.836539    4292 round_trippers.go:580]     Audit-Id: a9fdb4f7-31e3-48e4-b5f3-023b2c5e4bab
	I0806 00:38:31.836547    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.836553    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.836556    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.836562    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.836568    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.836674    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.836837    4292 pod_ready.go:92] pod "etcd-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:31.836847    4292 pod_ready.go:81] duration metric: took 2.741532ms for pod "etcd-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.836854    4292 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.836883    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-100000
	I0806 00:38:31.836888    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.836894    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.836898    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.837821    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.837830    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.837836    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.837840    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.837844    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.837846    4292 round_trippers.go:580]     Audit-Id: 32a7a6c7-72cf-4b7f-8f80-7ebb5aaaf666
	I0806 00:38:31.837850    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.837853    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.838003    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-100000","namespace":"kube-system","uid":"ce1dee9b-5f30-49a9-9066-7faf5f65c4d3","resourceVersion":"331","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"7812fbdfd4f741d8b504bcb30d9268c5","kubernetes.io/config.mirror":"7812fbdfd4f741d8b504bcb30d9268c5","kubernetes.io/config.seen":"2024-08-06T07:38:00.425843150Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7684 chars]
	I0806 00:38:31.838230    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.838237    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.838243    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.838247    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.839014    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.839023    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.839030    4292 round_trippers.go:580]     Audit-Id: 7f28e0f4-8551-4462-aec2-766b8d2482cb
	I0806 00:38:31.839036    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.839040    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.839042    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.839045    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.839048    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.839181    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.839335    4292 pod_ready.go:92] pod "kube-apiserver-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:31.839345    4292 pod_ready.go:81] duration metric: took 2.482949ms for pod "kube-apiserver-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.839352    4292 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.839378    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-100000
	I0806 00:38:31.839383    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.839388    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.839392    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.840298    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.840305    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.840310    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.840313    4292 round_trippers.go:580]     Audit-Id: cf384588-551f-4b8a-b13b-1adda6dff10a
	I0806 00:38:31.840317    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.840320    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.840324    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.840328    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.840495    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-100000","namespace":"kube-system","uid":"cefe88fb-c337-47c3-b4f2-acdadde539f2","resourceVersion":"329","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0ae29164078dfb7d8ac7d5a935c4d875","kubernetes.io/config.mirror":"0ae29164078dfb7d8ac7d5a935c4d875","kubernetes.io/config.seen":"2024-08-06T07:38:00.425770816Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7259 chars]
	I0806 00:38:31.840707    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.840714    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.840719    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.840722    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.841465    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.841471    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.841476    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.841481    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.841487    4292 round_trippers.go:580]     Audit-Id: 9a301694-659b-414d-8736-740501267c17
	I0806 00:38:31.841491    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.841496    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.841500    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.841678    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.841830    4292 pod_ready.go:92] pod "kube-controller-manager-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:31.841836    4292 pod_ready.go:81] duration metric: took 2.479787ms for pod "kube-controller-manager-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.841842    4292 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-crsrr" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.841875    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crsrr
	I0806 00:38:31.841880    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.841885    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.841890    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.842875    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.842883    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.842888    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.842891    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.842895    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.842898    4292 round_trippers.go:580]     Audit-Id: 9e07db72-d867-47d3-adbc-514b547e8978
	I0806 00:38:31.842901    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.842904    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.843113    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-crsrr","generateName":"kube-proxy-","namespace":"kube-system","uid":"f72beca3-9601-4aad-b3ba-33f8de5db052","resourceVersion":"403","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aeb7868a-2175-4480-b58d-3eb9a593c884","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aeb7868a-2175-4480-b58d-3eb9a593c884\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
	I0806 00:38:32.021239    4292 request.go:629] Waited for 177.889914ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-100000
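The "Waited for … due to client-side throttling" messages above come from the Kubernetes client's own request rate limiter, not from API Priority and Fairness on the server. A minimal token-bucket sketch illustrates why a burst of GETs starts accruing wait time (the QPS and burst numbers below are illustrative assumptions, not minikube's actual client settings):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter, in the spirit of a Kubernetes
    client's client-side throttle. QPS/burst values are caller-supplied;
    the defaults used in the test below are illustrative only."""

    def __init__(self, qps: float, burst: int, now=time.monotonic):
        self.qps = qps            # tokens refilled per second
        self.burst = burst        # maximum stored tokens
        self.tokens = float(burst)
        self.now = now            # injectable clock for testing
        self.last = now()

    def wait_time(self) -> float:
        """Reserve one request; return seconds the caller must wait
        before sending it (0.0 if a token is available now)."""
        t = self.now()
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(float(self.burst),
                          self.tokens + (t - self.last) * self.qps)
        self.last = t
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return 0.0
        # Not enough tokens: go into debt and report the required wait,
        # which grows linearly with each additional queued request --
        # matching the increasing waits seen in the log.
        needed = 1.0 - self.tokens
        self.tokens -= 1.0
        return needed / self.qps
```

Once the burst is exhausted, each extra request queued in the same instant waits an additional 1/QPS seconds, which is why consecutive log lines show steadily growing wait durations.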
	I0806 00:38:32.021360    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:32.021372    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.021384    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.021390    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.024288    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:32.024309    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.024318    4292 round_trippers.go:580]     Audit-Id: d85fbd21-5256-48bd-b92b-10eb012d9c7a
	I0806 00:38:32.024322    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.024327    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.024331    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.024336    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.024339    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.024617    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:32.024865    4292 pod_ready.go:92] pod "kube-proxy-crsrr" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:32.024877    4292 pod_ready.go:81] duration metric: took 183.025974ms for pod "kube-proxy-crsrr" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:32.024887    4292 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:32.222202    4292 request.go:629] Waited for 197.196804ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-100000
	I0806 00:38:32.222252    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-100000
	I0806 00:38:32.222260    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.222284    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.222291    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.225758    4292 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:38:32.225776    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.225783    4292 round_trippers.go:580]     Audit-Id: 9c5c96d8-55ee-43bd-b8fe-af3b79432f55
	I0806 00:38:32.225788    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.225791    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.225797    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.225800    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.225803    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.225862    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-100000","namespace":"kube-system","uid":"773d7bde-86f3-4e9d-b4aa-67ca3b345180","resourceVersion":"332","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4d38f57d568be838072abd789adb44b9","kubernetes.io/config.mirror":"4d38f57d568be838072abd789adb44b9","kubernetes.io/config.seen":"2024-08-06T07:38:00.425836810Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0806 00:38:32.420759    4292 request.go:629] Waited for 194.652014ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:32.420927    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:32.420938    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.420949    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.420955    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.423442    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:32.423460    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.423471    4292 round_trippers.go:580]     Audit-Id: 04a6ba1a-a35c-4d8b-a087-80f9206646b4
	I0806 00:38:32.423478    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.423483    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.423488    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.423493    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.423499    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.423791    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:32.424052    4292 pod_ready.go:92] pod "kube-scheduler-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:32.424064    4292 pod_ready.go:81] duration metric: took 399.162309ms for pod "kube-scheduler-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:32.424073    4292 pod_ready.go:38] duration metric: took 1.600692444s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
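The `pod_ready` waits above repeatedly GET each control-plane pod and inspect its status conditions. A small sketch of the readiness predicate being applied to the JSON bodies in this log (this mirrors the standard Ready-condition check, not minikube's exact source):

```python
def pod_ready(pod: dict) -> bool:
    """Return True if the pod object (decoded API JSON) carries a
    condition of type "Ready" with status "True" -- the check behind
    the log lines reporting has status "Ready":"True"."""
    for cond in pod.get("status", {}).get("conditions", []):
        if cond.get("type") == "Ready":
            return cond.get("status") == "True"
    # No Ready condition yet (e.g. pod still initializing).
    return False
```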
	I0806 00:38:32.424096    4292 api_server.go:52] waiting for apiserver process to appear ...
	I0806 00:38:32.424160    4292 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:38:32.436813    4292 command_runner.go:130] > 1953
	I0806 00:38:32.436840    4292 api_server.go:72] duration metric: took 17.725484476s to wait for apiserver process to appear ...
	I0806 00:38:32.436849    4292 api_server.go:88] waiting for apiserver healthz status ...
	I0806 00:38:32.436863    4292 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0806 00:38:32.440364    4292 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
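The healthz probe above treats the apiserver as healthy only when the endpoint returns HTTP 200 with the plain body "ok". A sketch of that interpretation, decoupled from any real HTTP call (function name and shape are assumptions for illustration):

```python
def apiserver_healthy(status_code: int, body: str) -> bool:
    """Interpret a /healthz response the way the log above does:
    healthy means HTTP 200 and a body of exactly "ok"
    (ignoring surrounding whitespace)."""
    return status_code == 200 and body.strip() == "ok"
```

A verbose /healthz response listing individual check results (e.g. "[+]ping ok" lines) would not satisfy this strict comparison, which is why the probe reads the whole body rather than just the status code.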
	I0806 00:38:32.440399    4292 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I0806 00:38:32.440404    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.440410    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.440421    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.440928    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:32.440937    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.440942    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.440946    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.440950    4292 round_trippers.go:580]     Content-Length: 263
	I0806 00:38:32.440953    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.440959    4292 round_trippers.go:580]     Audit-Id: c1a3bf62-d4bb-49fe-bb9c-6619b1793ab6
	I0806 00:38:32.440962    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.440965    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.440976    4292 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
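The "control plane version: v1.30.3" line that follows is derived from the `gitVersion` field of the `/version` response body shown above. A minimal sketch of that parse, using the exact body from this log:

```python
import json

# Verbatim /version response body from the log above.
VERSION_BODY = """{
  "major": "1",
  "minor": "30",
  "gitVersion": "v1.30.3",
  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
  "gitTreeState": "clean",
  "buildDate": "2024-07-16T23:48:12Z",
  "goVersion": "go1.22.5",
  "compiler": "gc",
  "platform": "linux/amd64"
}"""

info = json.loads(VERSION_BODY)
# gitVersion carries the full semver tag; major/minor are strings.
control_plane_version = info["gitVersion"]
```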
	I0806 00:38:32.441018    4292 api_server.go:141] control plane version: v1.30.3
	I0806 00:38:32.441028    4292 api_server.go:131] duration metric: took 4.174407ms to wait for apiserver health ...
	I0806 00:38:32.441033    4292 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 00:38:32.620918    4292 request.go:629] Waited for 179.84972ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:32.620960    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:32.620982    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.620988    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.620992    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.623183    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:32.623194    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.623199    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.623202    4292 round_trippers.go:580]     Audit-Id: 7febd61d-780d-47b6-884a-fdaf22170934
	I0806 00:38:32.623206    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.623211    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.623217    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.623221    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.623596    4292 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"446","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0806 00:38:32.624861    4292 system_pods.go:59] 8 kube-system pods found
	I0806 00:38:32.624876    4292 system_pods.go:61] "coredns-7db6d8ff4d-snf8h" [80bd44de-6f91-4e47-8832-a66b3c64808d] Running
	I0806 00:38:32.624880    4292 system_pods.go:61] "etcd-multinode-100000" [227ab7d9-399e-4151-bee7-1520182e38fe] Running
	I0806 00:38:32.624883    4292 system_pods.go:61] "kindnet-g2xk7" [84207ead-3403-4759-9bf2-ae0aa742699e] Running
	I0806 00:38:32.624886    4292 system_pods.go:61] "kube-apiserver-multinode-100000" [ce1dee9b-5f30-49a9-9066-7faf5f65c4d3] Running
	I0806 00:38:32.624890    4292 system_pods.go:61] "kube-controller-manager-multinode-100000" [cefe88fb-c337-47c3-b4f2-acdadde539f2] Running
	I0806 00:38:32.624895    4292 system_pods.go:61] "kube-proxy-crsrr" [f72beca3-9601-4aad-b3ba-33f8de5db052] Running
	I0806 00:38:32.624897    4292 system_pods.go:61] "kube-scheduler-multinode-100000" [773d7bde-86f3-4e9d-b4aa-67ca3b345180] Running
	I0806 00:38:32.624900    4292 system_pods.go:61] "storage-provisioner" [38b20fa5-6002-4e12-860f-1aa0047581b1] Running
	I0806 00:38:32.624904    4292 system_pods.go:74] duration metric: took 183.863815ms to wait for pod list to return data ...
	I0806 00:38:32.624911    4292 default_sa.go:34] waiting for default service account to be created ...
	I0806 00:38:32.821065    4292 request.go:629] Waited for 196.088199ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0806 00:38:32.821123    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0806 00:38:32.821132    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.821146    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.821153    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.824169    4292 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:38:32.824185    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.824192    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.824198    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.824203    4292 round_trippers.go:580]     Content-Length: 261
	I0806 00:38:32.824207    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.824210    4292 round_trippers.go:580]     Audit-Id: da9e49d4-6671-4b25-a056-32b71af0fb45
	I0806 00:38:32.824214    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.824217    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.824230    4292 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b920a0f4-26ad-4389-bfd3-1a9764da9619","resourceVersion":"336","creationTimestamp":"2024-08-06T07:38:14Z"}}]}
	I0806 00:38:32.824397    4292 default_sa.go:45] found service account: "default"
	I0806 00:38:32.824409    4292 default_sa.go:55] duration metric: took 199.488573ms for default service account to be created ...
	I0806 00:38:32.824419    4292 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 00:38:33.021550    4292 request.go:629] Waited for 197.072106ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:33.021720    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:33.021731    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:33.021741    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:33.021779    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:33.025126    4292 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:38:33.025143    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:33.025150    4292 round_trippers.go:580]     Audit-Id: e38b20d4-b38f-40c8-9e18-7f94f8f63289
	I0806 00:38:33.025155    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:33.025161    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:33.025166    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:33.025173    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:33.025177    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:33 GMT
	I0806 00:38:33.025737    4292 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"446","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0806 00:38:33.027034    4292 system_pods.go:86] 8 kube-system pods found
	I0806 00:38:33.027043    4292 system_pods.go:89] "coredns-7db6d8ff4d-snf8h" [80bd44de-6f91-4e47-8832-a66b3c64808d] Running
	I0806 00:38:33.027047    4292 system_pods.go:89] "etcd-multinode-100000" [227ab7d9-399e-4151-bee7-1520182e38fe] Running
	I0806 00:38:33.027050    4292 system_pods.go:89] "kindnet-g2xk7" [84207ead-3403-4759-9bf2-ae0aa742699e] Running
	I0806 00:38:33.027054    4292 system_pods.go:89] "kube-apiserver-multinode-100000" [ce1dee9b-5f30-49a9-9066-7faf5f65c4d3] Running
	I0806 00:38:33.027057    4292 system_pods.go:89] "kube-controller-manager-multinode-100000" [cefe88fb-c337-47c3-b4f2-acdadde539f2] Running
	I0806 00:38:33.027060    4292 system_pods.go:89] "kube-proxy-crsrr" [f72beca3-9601-4aad-b3ba-33f8de5db052] Running
	I0806 00:38:33.027066    4292 system_pods.go:89] "kube-scheduler-multinode-100000" [773d7bde-86f3-4e9d-b4aa-67ca3b345180] Running
	I0806 00:38:33.027069    4292 system_pods.go:89] "storage-provisioner" [38b20fa5-6002-4e12-860f-1aa0047581b1] Running
	I0806 00:38:33.027074    4292 system_pods.go:126] duration metric: took 202.645822ms to wait for k8s-apps to be running ...
	I0806 00:38:33.027081    4292 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 00:38:33.027147    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:38:33.038782    4292 system_svc.go:56] duration metric: took 11.697186ms WaitForService to wait for kubelet
	I0806 00:38:33.038797    4292 kubeadm.go:582] duration metric: took 18.327429775s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 00:38:33.038809    4292 node_conditions.go:102] verifying NodePressure condition ...
	I0806 00:38:33.220593    4292 request.go:629] Waited for 181.736174ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes
	I0806 00:38:33.220673    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0806 00:38:33.220683    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:33.220694    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:33.220703    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:33.223131    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:33.223147    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:33.223155    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:33 GMT
	I0806 00:38:33.223160    4292 round_trippers.go:580]     Audit-Id: c7a766de-973c-44db-9b8e-eb7ce291fdca
	I0806 00:38:33.223172    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:33.223177    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:33.223182    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:33.223222    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:33.223296    4292 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5011 chars]
	I0806 00:38:33.223576    4292 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 00:38:33.223592    4292 node_conditions.go:123] node cpu capacity is 2
	I0806 00:38:33.223604    4292 node_conditions.go:105] duration metric: took 184.787012ms to run NodePressure ...
	I0806 00:38:33.223614    4292 start.go:241] waiting for startup goroutines ...
	I0806 00:38:33.223627    4292 start.go:246] waiting for cluster config update ...
	I0806 00:38:33.223640    4292 start.go:255] writing updated cluster config ...
	I0806 00:38:33.244314    4292 out.go:177] 
	I0806 00:38:33.265217    4292 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:38:33.265273    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:38:33.287112    4292 out.go:177] * Starting "multinode-100000-m02" worker node in "multinode-100000" cluster
	I0806 00:38:33.345022    4292 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:38:33.345057    4292 cache.go:56] Caching tarball of preloaded images
	I0806 00:38:33.345244    4292 preload.go:172] Found /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0806 00:38:33.345262    4292 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 00:38:33.345351    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:38:33.346110    4292 start.go:360] acquireMachinesLock for multinode-100000-m02: {Name:mk23fe223591838ba69a1052c4474834b6e8897d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:38:33.346217    4292 start.go:364] duration metric: took 84.997µs to acquireMachinesLock for "multinode-100000-m02"
	I0806 00:38:33.346243    4292 start.go:93] Provisioning new machine with config: &{Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0806 00:38:33.346328    4292 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0806 00:38:33.367079    4292 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 00:38:33.367208    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:33.367236    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:33.376938    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52447
	I0806 00:38:33.377289    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:33.377644    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:33.377655    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:33.377842    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:33.377956    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:38:33.378049    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:33.378167    4292 start.go:159] libmachine.API.Create for "multinode-100000" (driver="hyperkit")
	I0806 00:38:33.378183    4292 client.go:168] LocalClient.Create starting
	I0806 00:38:33.378211    4292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem
	I0806 00:38:33.378259    4292 main.go:141] libmachine: Decoding PEM data...
	I0806 00:38:33.378273    4292 main.go:141] libmachine: Parsing certificate...
	I0806 00:38:33.378324    4292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem
	I0806 00:38:33.378363    4292 main.go:141] libmachine: Decoding PEM data...
	I0806 00:38:33.378372    4292 main.go:141] libmachine: Parsing certificate...
	I0806 00:38:33.378386    4292 main.go:141] libmachine: Running pre-create checks...
	I0806 00:38:33.378391    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .PreCreateCheck
	I0806 00:38:33.378464    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:33.378493    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetConfigRaw
	I0806 00:38:33.388269    4292 main.go:141] libmachine: Creating machine...
	I0806 00:38:33.388286    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .Create
	I0806 00:38:33.388457    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:33.388692    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | I0806 00:38:33.388444    4424 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 00:38:33.388794    4292 main.go:141] libmachine: (multinode-100000-m02) Downloading /Users/jenkins/minikube-integration/19370-944/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-944/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 00:38:33.588443    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | I0806 00:38:33.588344    4424 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa...
	I0806 00:38:33.635329    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | I0806 00:38:33.635211    4424 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/multinode-100000-m02.rawdisk...
	I0806 00:38:33.635352    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Writing magic tar header
	I0806 00:38:33.635368    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Writing SSH key tar header
	I0806 00:38:33.635773    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | I0806 00:38:33.635735    4424 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02 ...
	I0806 00:38:34.046661    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:34.046692    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/hyperkit.pid
	I0806 00:38:34.046795    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Using UUID 11e38ce6-805a-4a8b-9cb1-968ee3a613d4
	I0806 00:38:34.072180    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Generated MAC ee:b:b7:3a:75:5c
	I0806 00:38:34.072206    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000
	I0806 00:38:34.072252    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"11e38ce6-805a-4a8b-9cb1-968ee3a613d4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00011a450)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", pr
ocess:(*os.Process)(nil)}
	I0806 00:38:34.072281    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"11e38ce6-805a-4a8b-9cb1-968ee3a613d4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00011a450)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", pr
ocess:(*os.Process)(nil)}
	I0806 00:38:34.072340    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "11e38ce6-805a-4a8b-9cb1-968ee3a613d4", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/multinode-100000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage,/Users/jenkins
/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"}
	I0806 00:38:34.072382    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 11e38ce6-805a-4a8b-9cb1-968ee3a613d4 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/multinode-100000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-1
00000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"
	I0806 00:38:34.072394    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0806 00:38:34.075231    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: Pid is 4427
	I0806 00:38:34.076417    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 0
	I0806 00:38:34.076438    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:34.076502    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:34.077372    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:34.077449    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:34.077468    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:34.077497    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:34.077509    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:34.077532    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:34.077550    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:34.077560    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:34.077570    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:34.077578    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:34.077587    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:34.077606    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:34.077631    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:34.077647    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:34.082964    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0806 00:38:34.092078    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0806 00:38:34.092798    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:38:34.092819    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:38:34.092831    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:38:34.092850    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:38:34.480770    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0806 00:38:34.480795    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0806 00:38:34.595499    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:38:34.595518    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:38:34.595530    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:38:34.595538    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:38:34.596350    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0806 00:38:34.596362    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0806 00:38:36.077787    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 1
	I0806 00:38:36.077803    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:36.077889    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:36.078719    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:36.078768    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:36.078779    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:36.078796    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:36.078805    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:36.078813    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:36.078820    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:36.078827    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:36.078837    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:36.078843    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:36.078849    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:36.078864    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:36.078881    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:36.078889    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:38.079369    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 2
	I0806 00:38:38.079385    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:38.079432    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:38.080212    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:38.080262    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:38.080273    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:38.080290    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:38.080296    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:38.080303    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:38.080310    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:38.080318    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:38.080325    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:38.080339    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:38.080355    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:38.080367    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:38.080376    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:38.080384    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:40.081876    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 3
	I0806 00:38:40.081892    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:40.081903    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:40.082774    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:40.082801    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:40.082812    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:40.082846    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:40.082873    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:40.082900    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:40.082918    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:40.082931    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:40.082940    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:40.082950    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:40.082966    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:40.082978    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:40.082987    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:40.082995    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:40.179725    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:40 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0806 00:38:40.179781    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:40 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0806 00:38:40.179795    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:40 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0806 00:38:40.203197    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:40 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0806 00:38:42.084360    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 4
	I0806 00:38:42.084374    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:42.084499    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:42.085281    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:42.085335    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:42.085343    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:42.085351    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:42.085358    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:42.085365    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:42.085371    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:42.085378    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:42.085386    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:42.085402    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:42.085414    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:42.085433    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:42.085441    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:42.085450    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:44.085602    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 5
	I0806 00:38:44.085628    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:44.085697    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:44.086496    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:44.086550    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 13 entries in /var/db/dhcpd_leases!
	I0806 00:38:44.086561    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b32483}
	I0806 00:38:44.086569    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found match: ee:b:b7:3a:75:5c
	I0806 00:38:44.086577    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | IP: 192.169.0.14
	I0806 00:38:44.086637    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetConfigRaw
	I0806 00:38:44.087855    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:44.087962    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:44.088059    4292 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0806 00:38:44.088068    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetState
	I0806 00:38:44.088141    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:44.088197    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:44.089006    4292 main.go:141] libmachine: Detecting operating system of created instance...
	I0806 00:38:44.089014    4292 main.go:141] libmachine: Waiting for SSH to be available...
	I0806 00:38:44.089023    4292 main.go:141] libmachine: Getting to WaitForSSH function...
	I0806 00:38:44.089029    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:44.089111    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:44.089190    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:44.089273    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:44.089354    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:44.089473    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:44.089664    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:44.089672    4292 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0806 00:38:45.153792    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:38:45.153806    4292 main.go:141] libmachine: Detecting the provisioner...
	I0806 00:38:45.153811    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.153942    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.154043    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.154169    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.154275    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.154425    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.154571    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.154581    4292 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0806 00:38:45.217564    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0806 00:38:45.217637    4292 main.go:141] libmachine: found compatible host: buildroot
	I0806 00:38:45.217648    4292 main.go:141] libmachine: Provisioning with buildroot...
	I0806 00:38:45.217668    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:38:45.217807    4292 buildroot.go:166] provisioning hostname "multinode-100000-m02"
	I0806 00:38:45.217817    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:38:45.217917    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.218023    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.218107    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.218194    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.218285    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.218407    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.218557    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.218566    4292 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-100000-m02 && echo "multinode-100000-m02" | sudo tee /etc/hostname
	I0806 00:38:45.293086    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-100000-m02
	
	I0806 00:38:45.293102    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.293254    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.293346    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.293437    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.293522    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.293658    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.293798    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.293811    4292 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-100000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-100000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-100000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 00:38:45.363408    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:38:45.363423    4292 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19370-944/.minikube CaCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19370-944/.minikube}
	I0806 00:38:45.363450    4292 buildroot.go:174] setting up certificates
	I0806 00:38:45.363457    4292 provision.go:84] configureAuth start
	I0806 00:38:45.363465    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:38:45.363605    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:38:45.363709    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.363796    4292 provision.go:143] copyHostCerts
	I0806 00:38:45.363827    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:38:45.363873    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem, removing ...
	I0806 00:38:45.363879    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:38:45.364378    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem (1078 bytes)
	I0806 00:38:45.364592    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:38:45.364623    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem, removing ...
	I0806 00:38:45.364628    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:38:45.364717    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem (1123 bytes)
	I0806 00:38:45.364875    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:38:45.364915    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem, removing ...
	I0806 00:38:45.364920    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:38:45.365034    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem (1679 bytes)
	I0806 00:38:45.365183    4292 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem org=jenkins.multinode-100000-m02 san=[127.0.0.1 192.169.0.14 localhost minikube multinode-100000-m02]
	I0806 00:38:45.437744    4292 provision.go:177] copyRemoteCerts
	I0806 00:38:45.437791    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 00:38:45.437806    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.437948    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.438040    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.438126    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.438207    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:38:45.477030    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0806 00:38:45.477105    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0806 00:38:45.496899    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0806 00:38:45.496965    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 00:38:45.516273    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0806 00:38:45.516341    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 00:38:45.536083    4292 provision.go:87] duration metric: took 172.615051ms to configureAuth
	I0806 00:38:45.536096    4292 buildroot.go:189] setting minikube options for container-runtime
	I0806 00:38:45.536221    4292 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:38:45.536234    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:45.536380    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.536470    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.536563    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.536650    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.536733    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.536861    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.536987    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.536994    4292 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0806 00:38:45.599518    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0806 00:38:45.599531    4292 buildroot.go:70] root file system type: tmpfs
	I0806 00:38:45.599626    4292 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0806 00:38:45.599637    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.599779    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.599891    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.599996    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.600086    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.600232    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.600374    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.600420    4292 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.13"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0806 00:38:45.674942    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.13
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0806 00:38:45.674960    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.675092    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.675165    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.675259    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.675344    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.675469    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.675602    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.675614    4292 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0806 00:38:47.211811    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0806 00:38:47.211826    4292 main.go:141] libmachine: Checking connection to Docker...
	I0806 00:38:47.211840    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetURL
	I0806 00:38:47.211985    4292 main.go:141] libmachine: Docker is up and running!
	I0806 00:38:47.211993    4292 main.go:141] libmachine: Reticulating splines...
	I0806 00:38:47.212004    4292 client.go:171] duration metric: took 13.833536596s to LocalClient.Create
	I0806 00:38:47.212016    4292 start.go:167] duration metric: took 13.833577856s to libmachine.API.Create "multinode-100000"
	I0806 00:38:47.212022    4292 start.go:293] postStartSetup for "multinode-100000-m02" (driver="hyperkit")
	I0806 00:38:47.212029    4292 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 00:38:47.212038    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.212165    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 00:38:47.212186    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:47.212274    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:47.212359    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.212450    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:47.212536    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:38:47.253675    4292 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 00:38:47.257359    4292 command_runner.go:130] > NAME=Buildroot
	I0806 00:38:47.257369    4292 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0806 00:38:47.257374    4292 command_runner.go:130] > ID=buildroot
	I0806 00:38:47.257380    4292 command_runner.go:130] > VERSION_ID=2023.02.9
	I0806 00:38:47.257386    4292 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0806 00:38:47.257598    4292 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 00:38:47.257609    4292 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/addons for local assets ...
	I0806 00:38:47.257715    4292 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/files for local assets ...
	I0806 00:38:47.257899    4292 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> 14372.pem in /etc/ssl/certs
	I0806 00:38:47.257909    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> /etc/ssl/certs/14372.pem
	I0806 00:38:47.258116    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 00:38:47.265892    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem --> /etc/ssl/certs/14372.pem (1708 bytes)
	I0806 00:38:47.297110    4292 start.go:296] duration metric: took 85.078237ms for postStartSetup
	I0806 00:38:47.297144    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetConfigRaw
	I0806 00:38:47.297792    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:38:47.297951    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:38:47.298302    4292 start.go:128] duration metric: took 13.951673071s to createHost
	I0806 00:38:47.298316    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:47.298413    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:47.298502    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.298600    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.298678    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:47.298783    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:47.298907    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:47.298914    4292 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 00:38:47.362043    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722929927.409318196
	
	I0806 00:38:47.362057    4292 fix.go:216] guest clock: 1722929927.409318196
	I0806 00:38:47.362062    4292 fix.go:229] Guest: 2024-08-06 00:38:47.409318196 -0700 PDT Remote: 2024-08-06 00:38:47.29831 -0700 PDT m=+194.654596821 (delta=111.008196ms)
	I0806 00:38:47.362071    4292 fix.go:200] guest clock delta is within tolerance: 111.008196ms
	I0806 00:38:47.362075    4292 start.go:83] releasing machines lock for "multinode-100000-m02", held for 14.015572789s
	I0806 00:38:47.362092    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.362220    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:38:47.382612    4292 out.go:177] * Found network options:
	I0806 00:38:47.403509    4292 out.go:177]   - NO_PROXY=192.169.0.13
	W0806 00:38:47.425687    4292 proxy.go:119] fail to check proxy env: Error ip not in block
	I0806 00:38:47.425738    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.426659    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.426958    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.427090    4292 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 00:38:47.427141    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	W0806 00:38:47.427187    4292 proxy.go:119] fail to check proxy env: Error ip not in block
	I0806 00:38:47.427313    4292 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0806 00:38:47.427341    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:47.427407    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:47.427565    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.427581    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:47.427794    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:47.427828    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.428004    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:38:47.428059    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:47.428184    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:38:47.463967    4292 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0806 00:38:47.464076    4292 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 00:38:47.464135    4292 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 00:38:47.515738    4292 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0806 00:38:47.516046    4292 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0806 00:38:47.516081    4292 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 00:38:47.516093    4292 start.go:495] detecting cgroup driver to use...
	I0806 00:38:47.516195    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:38:47.531806    4292 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0806 00:38:47.532062    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0806 00:38:47.541039    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0806 00:38:47.549828    4292 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0806 00:38:47.549876    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0806 00:38:47.558599    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:38:47.567484    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0806 00:38:47.576295    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:38:47.585146    4292 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 00:38:47.594084    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0806 00:38:47.603103    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0806 00:38:47.612032    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0806 00:38:47.620981    4292 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 00:38:47.628905    4292 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0806 00:38:47.629040    4292 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 00:38:47.637032    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:38:47.727863    4292 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0806 00:38:47.745831    4292 start.go:495] detecting cgroup driver to use...
	I0806 00:38:47.745898    4292 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0806 00:38:47.763079    4292 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0806 00:38:47.764017    4292 command_runner.go:130] > [Unit]
	I0806 00:38:47.764028    4292 command_runner.go:130] > Description=Docker Application Container Engine
	I0806 00:38:47.764033    4292 command_runner.go:130] > Documentation=https://docs.docker.com
	I0806 00:38:47.764038    4292 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0806 00:38:47.764043    4292 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0806 00:38:47.764047    4292 command_runner.go:130] > StartLimitBurst=3
	I0806 00:38:47.764051    4292 command_runner.go:130] > StartLimitIntervalSec=60
	I0806 00:38:47.764054    4292 command_runner.go:130] > [Service]
	I0806 00:38:47.764058    4292 command_runner.go:130] > Type=notify
	I0806 00:38:47.764062    4292 command_runner.go:130] > Restart=on-failure
	I0806 00:38:47.764066    4292 command_runner.go:130] > Environment=NO_PROXY=192.169.0.13
	I0806 00:38:47.764072    4292 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0806 00:38:47.764084    4292 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0806 00:38:47.764091    4292 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0806 00:38:47.764099    4292 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0806 00:38:47.764105    4292 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0806 00:38:47.764111    4292 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0806 00:38:47.764118    4292 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0806 00:38:47.764125    4292 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0806 00:38:47.764132    4292 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0806 00:38:47.764135    4292 command_runner.go:130] > ExecStart=
	I0806 00:38:47.764154    4292 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0806 00:38:47.764161    4292 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0806 00:38:47.764170    4292 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0806 00:38:47.764178    4292 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0806 00:38:47.764185    4292 command_runner.go:130] > LimitNOFILE=infinity
	I0806 00:38:47.764190    4292 command_runner.go:130] > LimitNPROC=infinity
	I0806 00:38:47.764193    4292 command_runner.go:130] > LimitCORE=infinity
	I0806 00:38:47.764198    4292 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0806 00:38:47.764203    4292 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0806 00:38:47.764207    4292 command_runner.go:130] > TasksMax=infinity
	I0806 00:38:47.764211    4292 command_runner.go:130] > TimeoutStartSec=0
	I0806 00:38:47.764221    4292 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0806 00:38:47.764225    4292 command_runner.go:130] > Delegate=yes
	I0806 00:38:47.764229    4292 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0806 00:38:47.764248    4292 command_runner.go:130] > KillMode=process
	I0806 00:38:47.764252    4292 command_runner.go:130] > [Install]
	I0806 00:38:47.764256    4292 command_runner.go:130] > WantedBy=multi-user.target
	I0806 00:38:47.765971    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:38:47.779284    4292 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 00:38:47.799617    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:38:47.811733    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:38:47.822897    4292 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0806 00:38:47.842546    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:38:47.852923    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:38:47.867417    4292 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0806 00:38:47.867762    4292 ssh_runner.go:195] Run: which cri-dockerd
	I0806 00:38:47.870482    4292 command_runner.go:130] > /usr/bin/cri-dockerd
	I0806 00:38:47.870656    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0806 00:38:47.877934    4292 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0806 00:38:47.891287    4292 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0806 00:38:47.996736    4292 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0806 00:38:48.093921    4292 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0806 00:38:48.093947    4292 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0806 00:38:48.107654    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:38:48.205348    4292 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:39:49.225463    4292 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0806 00:39:49.225479    4292 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0806 00:39:49.225576    4292 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.019011706s)
	I0806 00:39:49.225635    4292 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0806 00:39:49.235342    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0806 00:39:49.235356    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.029974914Z" level=info msg="Starting up"
	I0806 00:39:49.235366    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.030437769Z" level=info msg="containerd not running, starting managed containerd"
	I0806 00:39:49.235376    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.030979400Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=517
	I0806 00:39:49.235386    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.047036729Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0806 00:39:49.235397    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064397167Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0806 00:39:49.235412    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064452673Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0806 00:39:49.235422    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064502313Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0806 00:39:49.235431    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064513542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235443    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064584182Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235454    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064595120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235473    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064727739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235483    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064762709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235494    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064774342Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235504    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064782161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235516    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064887916Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235526    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.065042581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235542    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.066836201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235552    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.066879570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235575    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067028916Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235585    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067064324Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0806 00:39:49.235594    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067179567Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0806 00:39:49.235602    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067249087Z" level=info msg="metadata content store policy set" policy=shared
	I0806 00:39:49.235611    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069585528Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0806 00:39:49.235620    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069659860Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0806 00:39:49.235632    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069674694Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0806 00:39:49.235641    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069684754Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0806 00:39:49.235650    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069696901Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0806 00:39:49.235663    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069776277Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0806 00:39:49.235672    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070041788Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0806 00:39:49.235681    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070145442Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0806 00:39:49.235690    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070181841Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0806 00:39:49.235699    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070193788Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0806 00:39:49.235708    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070209053Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235730    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070220561Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235739    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070229053Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235748    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070237872Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235765    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070247145Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235774    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070258808Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235870    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070271932Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235884    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070282113Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235895    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070295317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235905    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070333749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235913    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070369063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235922    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070379382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235931    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070387399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235940    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070395816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235948    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070403669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235957    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070414456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235966    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070430669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235975    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070442977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235983    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070451302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235992    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070459477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236001    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070468439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236009    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070478113Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0806 00:39:49.236018    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070497412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236026    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070508384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236035    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070518009Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0806 00:39:49.236044    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070547883Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0806 00:39:49.236055    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070582373Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0806 00:39:49.236065    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070592270Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0806 00:39:49.236165    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070600495Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0806 00:39:49.236179    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070607217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236192    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070615273Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0806 00:39:49.236200    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070622931Z" level=info msg="NRI interface is disabled by configuration."
	I0806 00:39:49.236208    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070750538Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0806 00:39:49.236217    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070809085Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0806 00:39:49.236224    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070954500Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0806 00:39:49.236232    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070997549Z" level=info msg="containerd successfully booted in 0.024512s"
	I0806 00:39:49.236240    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.050791909Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0806 00:39:49.236247    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.057142082Z" level=info msg="Loading containers: start."
	I0806 00:39:49.236266    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.142415375Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0806 00:39:49.236275    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.222958623Z" level=info msg="Loading containers: done."
	I0806 00:39:49.236287    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.231011060Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	I0806 00:39:49.236296    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.231179810Z" level=info msg="Daemon has completed initialization"
	I0806 00:39:49.236304    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.256766502Z" level=info msg="API listen on [::]:2376"
	I0806 00:39:49.236312    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 systemd[1]: Started Docker Application Container Engine.
	I0806 00:39:49.236320    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.256921161Z" level=info msg="API listen on /var/run/docker.sock"
	I0806 00:39:49.236327    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.264611587Z" level=info msg="Processing signal 'terminated'"
	I0806 00:39:49.236336    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265650519Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0806 00:39:49.236346    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265852818Z" level=info msg="Daemon shutdown complete"
	I0806 00:39:49.236355    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265902413Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0806 00:39:49.236364    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265913447Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0806 00:39:49.236371    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0806 00:39:49.236376    4292 command_runner.go:130] > Aug 06 07:38:49 multinode-100000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0806 00:39:49.236404    4292 command_runner.go:130] > Aug 06 07:38:49 multinode-100000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0806 00:39:49.236411    4292 command_runner.go:130] > Aug 06 07:38:49 multinode-100000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0806 00:39:49.236417    4292 command_runner.go:130] > Aug 06 07:38:49 multinode-100000-m02 dockerd[911]: time="2024-08-06T07:38:49.299585024Z" level=info msg="Starting up"
	I0806 00:39:49.236427    4292 command_runner.go:130] > Aug 06 07:39:49 multinode-100000-m02 dockerd[911]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0806 00:39:49.236434    4292 command_runner.go:130] > Aug 06 07:39:49 multinode-100000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0806 00:39:49.236440    4292 command_runner.go:130] > Aug 06 07:39:49 multinode-100000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0806 00:39:49.236446    4292 command_runner.go:130] > Aug 06 07:39:49 multinode-100000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0806 00:39:49.260697    4292 out.go:177] 
	W0806 00:39:49.281618    4292 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 06 07:38:46 multinode-100000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.029974914Z" level=info msg="Starting up"
	Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.030437769Z" level=info msg="containerd not running, starting managed containerd"
	Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.030979400Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=517
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.047036729Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064397167Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064452673Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064502313Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064513542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064584182Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064595120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064727739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064762709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064774342Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064782161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064887916Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.065042581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.066836201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.066879570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067028916Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067064324Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067179567Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067249087Z" level=info msg="metadata content store policy set" policy=shared
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069585528Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069659860Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069674694Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069684754Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069696901Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069776277Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070041788Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070145442Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070181841Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070193788Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070209053Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070220561Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070229053Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070237872Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070247145Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070258808Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070271932Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070282113Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070295317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070333749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070369063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070379382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070387399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070395816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070403669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070414456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070430669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070442977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070451302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070459477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070468439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070478113Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070497412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070508384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070518009Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070547883Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070582373Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070592270Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070600495Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070607217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070615273Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070622931Z" level=info msg="NRI interface is disabled by configuration."
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070750538Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070809085Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070954500Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070997549Z" level=info msg="containerd successfully booted in 0.024512s"
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.050791909Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.057142082Z" level=info msg="Loading containers: start."
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.142415375Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.222958623Z" level=info msg="Loading containers: done."
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.231011060Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.231179810Z" level=info msg="Daemon has completed initialization"
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.256766502Z" level=info msg="API listen on [::]:2376"
	Aug 06 07:38:47 multinode-100000-m02 systemd[1]: Started Docker Application Container Engine.
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.256921161Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.264611587Z" level=info msg="Processing signal 'terminated'"
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265650519Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265852818Z" level=info msg="Daemon shutdown complete"
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265902413Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265913447Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 06 07:38:48 multinode-100000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Aug 06 07:38:49 multinode-100000-m02 systemd[1]: docker.service: Deactivated successfully.
	Aug 06 07:38:49 multinode-100000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:38:49 multinode-100000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:38:49 multinode-100000-m02 dockerd[911]: time="2024-08-06T07:38:49.299585024Z" level=info msg="Starting up"
	Aug 06 07:39:49 multinode-100000-m02 dockerd[911]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 06 07:39:49 multinode-100000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 06 07:39:49 multinode-100000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 06 07:39:49 multinode-100000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0806 00:39:49.281745    4292 out.go:239] * 
	W0806 00:39:49.282923    4292 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 00:39:49.343567    4292 out.go:177] 
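
	The journalctl excerpt above shows the actual failure: dockerd[911] starts at 07:38:49 and, exactly 60 seconds later, gives up with `failed to dial "/run/containerd/containerd.sock": context deadline exceeded`, which systemd then reports as `status=1/FAILURE`. When triaging runs like this one, it can help to pull just the terminal failure lines out of a saved journal dump rather than scanning the full plugin-loading noise. A minimal sketch (the path `/tmp/docker-journal.txt` and the sample lines are hypothetical stand-ins for a real `sudo journalctl --no-pager -u docker` capture like the one embedded above):

	```shell
	# Write a small sample journal dump, mimicking the failure sequence above.
	cat > /tmp/docker-journal.txt <<'EOF'
	Aug 06 07:38:49 multinode-100000-m02 dockerd[911]: time="2024-08-06T07:38:49.299585024Z" level=info msg="Starting up"
	Aug 06 07:39:49 multinode-100000-m02 dockerd[911]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 06 07:39:49 multinode-100000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 06 07:39:49 multinode-100000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	EOF

	# Keep only the lines that explain why the daemon never came up:
	# the dockerd startup failure and the systemd unit result.
	grep -E 'failed to start daemon|Failed with result' /tmp/docker-journal.txt
	```

	On a live node the equivalent would be `sudo journalctl --no-pager -u docker | grep -E 'failed to start daemon|Failed with result'`, alongside the `systemctl status docker.service` check that the error message itself recommends.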
	
	
	==> Docker <==
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.120405532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.122053171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.122124908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.122262728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.123348677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 cri-dockerd[1120]: time="2024-08-06T07:38:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5fae897eca5b0180afaec9950c31ab8fe6410f45ea64033ab2505d448d0abc87/resolv.conf as [nameserver 192.169.0.1]"
	Aug 06 07:38:31 multinode-100000 cri-dockerd[1120]: time="2024-08-06T07:38:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ea5bc31c54836987e38373933c6df0383027c87ef8cff7c9e1da5b24b5cabe9c/resolv.conf as [nameserver 192.169.0.1]"
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.260884497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.261094181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.261344995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.270291928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.310563342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.310630330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.310652817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.310750128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:39:53 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:53.415212392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:39:53 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:53.415272093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:39:53 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:53.415281683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:39:53 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:53.415427967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:39:53 multinode-100000 cri-dockerd[1120]: time="2024-08-06T07:39:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/730773bd53054521739eb2bf3731e90f06df86c05a2f2435964943abea426db3/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 06 07:39:54 multinode-100000 cri-dockerd[1120]: time="2024-08-06T07:39:54Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Aug 06 07:39:54 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:54.619309751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:39:54 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:54.619368219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:39:54 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:54.619377598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:39:54 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:54.619772649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f4860a1bb0cb9       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   11 minutes ago      Running             busybox                   0                   730773bd53054       busybox-fc5497c4f-dzbn7
	4a58bc5cb9c3e       cbb01a7bd410d                                                                                         13 minutes ago      Running             coredns                   0                   ea5bc31c54836       coredns-7db6d8ff4d-snf8h
	47e0c0c6895ef       6e38f40d628db                                                                                         13 minutes ago      Running             storage-provisioner       0                   5fae897eca5b0       storage-provisioner
	ca21c7b20c75e       kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3              13 minutes ago      Running             kindnet-cni               0                   731b397a827bd       kindnet-g2xk7
	10a2028447459       55bb025d2cfa5                                                                                         13 minutes ago      Running             kube-proxy                0                   6bbb2ed0b308f       kube-proxy-crsrr
	09c41cba0052b       3edc18e7b7672                                                                                         13 minutes ago      Running             kube-scheduler            0                   d20d569460ead       kube-scheduler-multinode-100000
	b60a8dd0efa51       3861cfcd7c04c                                                                                         13 minutes ago      Running             etcd                      0                   94cf07fa5ddcf       etcd-multinode-100000
	6d93185f30a91       1f6d574d502f3                                                                                         13 minutes ago      Running             kube-apiserver            0                   bde71375b0e4c       kube-apiserver-multinode-100000
	e6892e6b325e1       76932a3b37d7e                                                                                         13 minutes ago      Running             kube-controller-manager   0                   8cca7996d392f       kube-controller-manager-multinode-100000
	
	
	==> coredns [4a58bc5cb9c3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54441 - 10694 "HINFO IN 5152607944082316412.2643734041882751245. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.012399296s
	[INFO] 10.244.0.3:56703 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015252s
	[INFO] 10.244.0.3:42200 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.046026881s
	[INFO] 10.244.0.3:42318 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.01031955s
	[INFO] 10.244.0.3:37586 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.010459799s
	[INFO] 10.244.0.3:58156 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135202s
	[INFO] 10.244.0.3:44245 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010537472s
	[INFO] 10.244.0.3:44922 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150629s
	[INFO] 10.244.0.3:39974 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013721s
	[INFO] 10.244.0.3:33617 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010347469s
	[INFO] 10.244.0.3:38936 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154675s
	[INFO] 10.244.0.3:44726 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080983s
	[INFO] 10.244.0.3:41349 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000247413s
	[INFO] 10.244.0.3:54177 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116507s
	[INFO] 10.244.0.3:35929 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000055089s
	[INFO] 10.244.0.3:46361 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084906s
	[INFO] 10.244.0.3:49686 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085442s
	
	
	==> describe nodes <==
	Name:               multinode-100000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-100000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=multinode-100000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_06T00_38_01_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:37:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-100000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 07:51:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 07:50:14 +0000   Tue, 06 Aug 2024 07:37:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 07:50:14 +0000   Tue, 06 Aug 2024 07:37:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 07:50:14 +0000   Tue, 06 Aug 2024 07:37:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 07:50:14 +0000   Tue, 06 Aug 2024 07:38:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.13
	  Hostname:    multinode-100000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 10d8fd2a8ab04e6a90b6dfc076d9ae86
	  System UUID:                9d6d49b5-0000-0000-bb0f-6ea8b6ad2848
	  Boot ID:                    dbebf245-a006-4d46-bf5f-51c5f84b672f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dzbn7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-snf8h                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-100000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-g2xk7                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-100000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-multinode-100000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-crsrr                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-100000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node multinode-100000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node multinode-100000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node multinode-100000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node multinode-100000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node multinode-100000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node multinode-100000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node multinode-100000 event: Registered Node multinode-100000 in Controller
	  Normal  NodeReady                13m                kubelet          Node multinode-100000 status is now: NodeReady
	
	
	==> dmesg <==
	[  +2.230733] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.851509] systemd-fstab-generator[493]: Ignoring "noauto" option for root device
	[  +0.100234] systemd-fstab-generator[504]: Ignoring "noauto" option for root device
	[  +1.793153] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +0.258718] systemd-fstab-generator[802]: Ignoring "noauto" option for root device
	[  +0.053606] kauditd_printk_skb: 95 callbacks suppressed
	[  +0.051277] systemd-fstab-generator[814]: Ignoring "noauto" option for root device
	[  +0.111209] systemd-fstab-generator[828]: Ignoring "noauto" option for root device
	[Aug 6 07:37] systemd-fstab-generator[1073]: Ignoring "noauto" option for root device
	[  +0.053283] kauditd_printk_skb: 92 callbacks suppressed
	[  +0.042150] systemd-fstab-generator[1085]: Ignoring "noauto" option for root device
	[  +0.103517] systemd-fstab-generator[1097]: Ignoring "noauto" option for root device
	[  +0.125760] systemd-fstab-generator[1112]: Ignoring "noauto" option for root device
	[  +3.585995] systemd-fstab-generator[1212]: Ignoring "noauto" option for root device
	[  +2.213789] kauditd_printk_skb: 100 callbacks suppressed
	[  +0.337931] systemd-fstab-generator[1463]: Ignoring "noauto" option for root device
	[  +3.523944] systemd-fstab-generator[1642]: Ignoring "noauto" option for root device
	[  +1.294549] kauditd_printk_skb: 100 callbacks suppressed
	[  +3.741886] systemd-fstab-generator[2044]: Ignoring "noauto" option for root device
	[Aug 6 07:38] systemd-fstab-generator[2255]: Ignoring "noauto" option for root device
	[  +0.124943] kauditd_printk_skb: 32 callbacks suppressed
	[ +16.004460] kauditd_printk_skb: 60 callbacks suppressed
	[Aug 6 07:39] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [b60a8dd0efa5] <==
	{"level":"info","ts":"2024-08-06T07:37:56.793629Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"e0290fa3161c5471","initial-advertise-peer-urls":["https://192.169.0.13:2380"],"listen-peer-urls":["https://192.169.0.13:2380"],"advertise-client-urls":["https://192.169.0.13:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.169.0.13:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-06T07:37:56.793645Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-06T07:37:56.796498Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2024-08-06T07:37:56.796632Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","added-peer-id":"e0290fa3161c5471","added-peer-peer-urls":["https://192.169.0.13:2380"]}
	{"level":"info","ts":"2024-08-06T07:37:57.149401Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-06T07:37:57.149446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-06T07:37:57.149465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgPreVoteResp from e0290fa3161c5471 at term 1"}
	{"level":"info","ts":"2024-08-06T07:37:57.149631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became candidate at term 2"}
	{"level":"info","ts":"2024-08-06T07:37:57.14964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgVoteResp from e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-08-06T07:37:57.149646Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became leader at term 2"}
	{"level":"info","ts":"2024-08-06T07:37:57.149652Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e0290fa3161c5471 elected leader e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-08-06T07:37:57.152418Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:37:57.153493Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e0290fa3161c5471","local-member-attributes":"{Name:multinode-100000 ClientURLs:[https://192.169.0.13:2379]}","request-path":"/0/members/e0290fa3161c5471/attributes","cluster-id":"87b46e718846f146","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-06T07:37:57.153528Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T07:37:57.154583Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T07:37:57.156332Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-06T07:37:57.162987Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.13:2379"}
	{"level":"info","ts":"2024-08-06T07:37:57.167336Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-06T07:37:57.167373Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-06T07:37:57.16953Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:37:57.169589Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:37:57.169719Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:47:57.219223Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":686}
	{"level":"info","ts":"2024-08-06T07:47:57.221754Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":686,"took":"2.185771ms","hash":4164319908,"current-db-size-bytes":1994752,"current-db-size":"2.0 MB","current-db-size-in-use-bytes":1994752,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-08-06T07:47:57.221798Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4164319908,"revision":686,"compact-revision":-1}
	
	
	==> kernel <==
	 07:51:42 up 16 min,  0 users,  load average: 0.01, 0.07, 0.04
	Linux multinode-100000 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ca21c7b20c75] <==
	I0806 07:49:39.617585       1 main.go:299] handling current node
	I0806 07:49:49.609464       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:49:49.609605       1 main.go:299] handling current node
	I0806 07:49:59.610257       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:49:59.610324       1 main.go:299] handling current node
	I0806 07:50:09.617433       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:50:09.617548       1 main.go:299] handling current node
	I0806 07:50:19.609011       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:50:19.609119       1 main.go:299] handling current node
	I0806 07:50:29.613066       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:50:29.613117       1 main.go:299] handling current node
	I0806 07:50:39.608584       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:50:39.608693       1 main.go:299] handling current node
	I0806 07:50:49.609744       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:50:49.609775       1 main.go:299] handling current node
	I0806 07:50:59.609097       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:50:59.609130       1 main.go:299] handling current node
	I0806 07:51:09.609598       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:51:09.609738       1 main.go:299] handling current node
	I0806 07:51:19.608251       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:51:19.608633       1 main.go:299] handling current node
	I0806 07:51:29.610799       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:51:29.611016       1 main.go:299] handling current node
	I0806 07:51:39.608566       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:51:39.608751       1 main.go:299] handling current node
	
	
	==> kube-apiserver [6d93185f30a9] <==
	I0806 07:37:58.455055       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0806 07:37:58.455074       1 policy_source.go:224] refreshing policies
	E0806 07:37:58.467821       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E0806 07:37:58.475966       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0806 07:37:58.532827       1 controller.go:615] quota admission added evaluator for: namespaces
	E0806 07:37:58.541093       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0806 07:37:58.672921       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0806 07:37:59.326856       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0806 07:37:59.329555       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0806 07:37:59.329585       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0806 07:37:59.607795       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0806 07:37:59.629707       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0806 07:37:59.743716       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0806 07:37:59.749420       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.13]
	I0806 07:37:59.751068       1 controller.go:615] quota admission added evaluator for: endpoints
	I0806 07:37:59.755409       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0806 07:38:00.364128       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0806 07:38:00.587524       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0806 07:38:00.593919       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0806 07:38:00.599813       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0806 07:38:14.702592       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0806 07:38:14.795881       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0806 07:51:40.593542       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52513: use of closed network connection
	E0806 07:51:40.913864       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52518: use of closed network connection
	E0806 07:51:41.219815       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52523: use of closed network connection
	
	
	==> kube-controller-manager [e6892e6b325e] <==
	I0806 07:38:14.911267       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0806 07:38:14.915445       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0806 07:38:14.917635       1 shared_informer.go:320] Caches are synced for resource quota
	I0806 07:38:15.016538       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	I0806 07:38:15.198343       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="389.133142ms"
	I0806 07:38:15.220236       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.849107ms"
	I0806 07:38:15.220368       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="94.121µs"
	I0806 07:38:15.344428       1 shared_informer.go:320] Caches are synced for garbage collector
	I0806 07:38:15.355219       1 shared_informer.go:320] Caches are synced for garbage collector
	I0806 07:38:15.355235       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0806 07:38:15.401729       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="38.655935ms"
	I0806 07:38:15.431945       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="30.14675ms"
	I0806 07:38:15.458535       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="26.562482ms"
	I0806 07:38:15.458649       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="50.614µs"
	I0806 07:38:30.766337       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="35.896µs"
	I0806 07:38:30.775206       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.914µs"
	I0806 07:38:31.717892       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="59.878µs"
	I0806 07:38:31.736658       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="9.976174ms"
	I0806 07:38:31.737084       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.186µs"
	I0806 07:38:34.714007       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0806 07:39:52.487758       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.078135ms"
	I0806 07:39:52.498018       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.216294ms"
	I0806 07:39:52.498073       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.228µs"
	I0806 07:39:55.173384       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.984127ms"
	I0806 07:39:55.173460       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.692µs"
	
	
	==> kube-proxy [10a202844745] <==
	I0806 07:38:15.590518       1 server_linux.go:69] "Using iptables proxy"
	I0806 07:38:15.601869       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.13"]
	I0806 07:38:15.662400       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0806 07:38:15.662440       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0806 07:38:15.662490       1 server_linux.go:165] "Using iptables Proxier"
	I0806 07:38:15.664791       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0806 07:38:15.664918       1 server.go:872] "Version info" version="v1.30.3"
	I0806 07:38:15.664946       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 07:38:15.665753       1 config.go:192] "Starting service config controller"
	I0806 07:38:15.665783       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0806 07:38:15.665799       1 config.go:101] "Starting endpoint slice config controller"
	I0806 07:38:15.665822       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0806 07:38:15.667388       1 config.go:319] "Starting node config controller"
	I0806 07:38:15.667416       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0806 07:38:15.765917       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0806 07:38:15.765965       1 shared_informer.go:320] Caches are synced for service config
	I0806 07:38:15.767534       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [09c41cba0052] <==
	W0806 07:37:58.445840       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0806 07:37:58.445932       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0806 07:37:58.446107       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0806 07:37:58.446242       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0806 07:37:58.446116       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0806 07:37:58.446419       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0806 07:37:58.445401       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0806 07:37:58.446582       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0806 07:37:58.446196       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0806 07:37:58.446734       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0806 07:37:59.253603       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0806 07:37:59.253776       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0806 07:37:59.282330       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0806 07:37:59.282504       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0806 07:37:59.305407       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0806 07:37:59.305621       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0806 07:37:59.351009       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0806 07:37:59.351049       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0806 07:37:59.487287       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0806 07:37:59.487395       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0806 07:37:59.506883       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0806 07:37:59.506925       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0806 07:37:59.509357       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0806 07:37:59.509392       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0806 07:38:01.840667       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 06 07:47:00 multinode-100000 kubelet[2051]: E0806 07:47:00.482719    2051 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:47:00 multinode-100000 kubelet[2051]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:47:00 multinode-100000 kubelet[2051]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:47:00 multinode-100000 kubelet[2051]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:47:00 multinode-100000 kubelet[2051]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:48:00 multinode-100000 kubelet[2051]: E0806 07:48:00.482201    2051 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:48:00 multinode-100000 kubelet[2051]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:48:00 multinode-100000 kubelet[2051]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:48:00 multinode-100000 kubelet[2051]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:48:00 multinode-100000 kubelet[2051]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:49:00 multinode-100000 kubelet[2051]: E0806 07:49:00.485250    2051 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:49:00 multinode-100000 kubelet[2051]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:49:00 multinode-100000 kubelet[2051]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:49:00 multinode-100000 kubelet[2051]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:49:00 multinode-100000 kubelet[2051]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:50:00 multinode-100000 kubelet[2051]: E0806 07:50:00.481450    2051 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:50:00 multinode-100000 kubelet[2051]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:50:00 multinode-100000 kubelet[2051]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:50:00 multinode-100000 kubelet[2051]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:50:00 multinode-100000 kubelet[2051]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:51:00 multinode-100000 kubelet[2051]: E0806 07:51:00.483720    2051 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:51:00 multinode-100000 kubelet[2051]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:51:00 multinode-100000 kubelet[2051]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:51:00 multinode-100000 kubelet[2051]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:51:00 multinode-100000 kubelet[2051]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [47e0c0c6895e] <==
	I0806 07:38:31.347790       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0806 07:38:31.362641       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0806 07:38:31.362689       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0806 07:38:31.380276       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0806 07:38:31.381044       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-100000_c7848ced-7c56-4ea5-84d6-257282f6fd56!
	I0806 07:38:31.382785       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"161b611b-7c0d-4908-b494-e0f62b136e12", APIVersion:"v1", ResourceVersion:"439", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-100000_c7848ced-7c56-4ea5-84d6-257282f6fd56 became leader
	I0806 07:38:31.481893       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-100000_c7848ced-7c56-4ea5-84d6-257282f6fd56!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-100000 -n multinode-100000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-100000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-6l7f2
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/DeployApp2Nodes]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-100000 describe pod busybox-fc5497c4f-6l7f2
helpers_test.go:282: (dbg) kubectl --context multinode-100000 describe pod busybox-fc5497c4f-6l7f2:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-6l7f2
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4lx7j (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-4lx7j:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  101s (x3 over 11m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (711.58s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (3.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-100000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-100000 -- exec busybox-fc5497c4f-6l7f2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:572: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-100000 -- exec busybox-fc5497c4f-6l7f2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": exit status 1 (119.655019ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-6l7f2 does not have a host assigned

                                                
                                                
** /stderr **
multinode_test.go:574: Pod busybox-fc5497c4f-6l7f2 could not resolve 'host.minikube.internal': exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-100000 -- exec busybox-fc5497c4f-dzbn7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-100000 -- exec busybox-fc5497c4f-dzbn7 -- sh -c "ping -c 1 192.169.0.1"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-100000 -n multinode-100000
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-100000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-100000 logs -n 25: (2.071816106s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| start   | -p multinode-100000                               | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:35 PDT |                     |
	|         | --wait=true --memory=2200                         |                  |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                  |         |         |                     |                     |
	|         | --alsologtostderr                                 |                  |         |         |                     |                     |
	|         | --driver=hyperkit                                 |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- apply -f                   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:39 PDT | 06 Aug 24 00:39 PDT |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- rollout                    | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:39 PDT |                     |
	|         | status deployment/busybox                         |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:49 PDT | 06 Aug 24 00:49 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:49 PDT | 06 Aug 24 00:49 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:49 PDT | 06 Aug 24 00:49 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec                       | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT |                     |
	|         | busybox-fc5497c4f-6l7f2 --                        |                  |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec                       | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | busybox-fc5497c4f-dzbn7 --                        |                  |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec                       | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT |                     |
	|         | busybox-fc5497c4f-6l7f2 --                        |                  |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec                       | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | busybox-fc5497c4f-dzbn7 --                        |                  |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec                       | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT |                     |
	|         | busybox-fc5497c4f-6l7f2 -- nslookup               |                  |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec                       | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | busybox-fc5497c4f-dzbn7 -- nslookup               |                  |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec                       | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT |                     |
	|         | busybox-fc5497c4f-6l7f2                           |                  |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec                       | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | busybox-fc5497c4f-dzbn7                           |                  |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec                       | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | busybox-fc5497c4f-dzbn7 -- sh                     |                  |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1                          |                  |         |         |                     |                     |
	|---------|---------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 00:35:32
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 00:35:32.676325    4292 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:35:32.676601    4292 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:35:32.676607    4292 out.go:304] Setting ErrFile to fd 2...
	I0806 00:35:32.676610    4292 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:35:32.676768    4292 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	I0806 00:35:32.678248    4292 out.go:298] Setting JSON to false
	I0806 00:35:32.700659    4292 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2094,"bootTime":1722927638,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0806 00:35:32.700749    4292 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 00:35:32.723275    4292 out.go:177] * [multinode-100000] minikube v1.33.1 on Darwin 14.5
	I0806 00:35:32.765686    4292 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 00:35:32.765838    4292 notify.go:220] Checking for updates...
	I0806 00:35:32.808341    4292 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:35:32.829496    4292 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0806 00:35:32.850407    4292 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:35:32.871672    4292 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 00:35:32.892641    4292 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 00:35:32.913945    4292 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:35:32.944520    4292 out.go:177] * Using the hyperkit driver based on user configuration
	I0806 00:35:32.986143    4292 start.go:297] selected driver: hyperkit
	I0806 00:35:32.986161    4292 start.go:901] validating driver "hyperkit" against <nil>
	I0806 00:35:32.986176    4292 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 00:35:32.989717    4292 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:35:32.989824    4292 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19370-944/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0806 00:35:32.998218    4292 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0806 00:35:33.002169    4292 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:35:33.002189    4292 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0806 00:35:33.002223    4292 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 00:35:33.002423    4292 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 00:35:33.002481    4292 cni.go:84] Creating CNI manager for ""
	I0806 00:35:33.002490    4292 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0806 00:35:33.002502    4292 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0806 00:35:33.002569    4292 start.go:340] cluster config:
	{Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:35:33.002652    4292 iso.go:125] acquiring lock: {Name:mka9ceffb203a07dd8928fb34e5b66df1a4204ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:35:33.044508    4292 out.go:177] * Starting "multinode-100000" primary control-plane node in "multinode-100000" cluster
	I0806 00:35:33.065219    4292 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:35:33.065293    4292 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0806 00:35:33.065354    4292 cache.go:56] Caching tarball of preloaded images
	I0806 00:35:33.065635    4292 preload.go:172] Found /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0806 00:35:33.065654    4292 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 00:35:33.066173    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:35:33.066211    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json: {Name:mk72349cbf3074da6761af52b168e673548f3ffe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:35:33.066817    4292 start.go:360] acquireMachinesLock for multinode-100000: {Name:mk23fe223591838ba69a1052c4474834b6e8897d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:35:33.066922    4292 start.go:364] duration metric: took 85.684µs to acquireMachinesLock for "multinode-100000"
	I0806 00:35:33.066972    4292 start.go:93] Provisioning new machine with config: &{Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 00:35:33.067065    4292 start.go:125] createHost starting for "" (driver="hyperkit")
	I0806 00:35:33.088582    4292 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 00:35:33.088841    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:35:33.088907    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:35:33.098805    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52410
	I0806 00:35:33.099159    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:35:33.099600    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:35:33.099614    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:35:33.099818    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:35:33.099943    4292 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:35:33.100033    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:33.100130    4292 start.go:159] libmachine.API.Create for "multinode-100000" (driver="hyperkit")
	I0806 00:35:33.100152    4292 client.go:168] LocalClient.Create starting
	I0806 00:35:33.100189    4292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem
	I0806 00:35:33.100243    4292 main.go:141] libmachine: Decoding PEM data...
	I0806 00:35:33.100257    4292 main.go:141] libmachine: Parsing certificate...
	I0806 00:35:33.100320    4292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem
	I0806 00:35:33.100359    4292 main.go:141] libmachine: Decoding PEM data...
	I0806 00:35:33.100370    4292 main.go:141] libmachine: Parsing certificate...
	I0806 00:35:33.100382    4292 main.go:141] libmachine: Running pre-create checks...
	I0806 00:35:33.100392    4292 main.go:141] libmachine: (multinode-100000) Calling .PreCreateCheck
	I0806 00:35:33.100485    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:33.100635    4292 main.go:141] libmachine: (multinode-100000) Calling .GetConfigRaw
	I0806 00:35:33.109837    4292 main.go:141] libmachine: Creating machine...
	I0806 00:35:33.109854    4292 main.go:141] libmachine: (multinode-100000) Calling .Create
	I0806 00:35:33.110025    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:33.110277    4292 main.go:141] libmachine: (multinode-100000) DBG | I0806 00:35:33.110022    4300 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 00:35:33.110418    4292 main.go:141] libmachine: (multinode-100000) Downloading /Users/jenkins/minikube-integration/19370-944/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-944/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 00:35:33.295827    4292 main.go:141] libmachine: (multinode-100000) DBG | I0806 00:35:33.295690    4300 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa...
	I0806 00:35:33.502634    4292 main.go:141] libmachine: (multinode-100000) DBG | I0806 00:35:33.502493    4300 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/multinode-100000.rawdisk...
	I0806 00:35:33.502655    4292 main.go:141] libmachine: (multinode-100000) DBG | Writing magic tar header
	I0806 00:35:33.502665    4292 main.go:141] libmachine: (multinode-100000) DBG | Writing SSH key tar header
	I0806 00:35:33.503537    4292 main.go:141] libmachine: (multinode-100000) DBG | I0806 00:35:33.503390    4300 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000 ...
	I0806 00:35:33.877390    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:33.877412    4292 main.go:141] libmachine: (multinode-100000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/hyperkit.pid
	I0806 00:35:33.877424    4292 main.go:141] libmachine: (multinode-100000) DBG | Using UUID 9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848
	I0806 00:35:33.988705    4292 main.go:141] libmachine: (multinode-100000) DBG | Generated MAC 1a:eb:5b:3:28:91
	I0806 00:35:33.988725    4292 main.go:141] libmachine: (multinode-100000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000
	I0806 00:35:33.988759    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000aa330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 00:35:33.988793    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000aa330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 00:35:33.988839    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/multinode-100000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"}
	I0806 00:35:33.988870    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/multinode-100000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/console-ring -f kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"
	I0806 00:35:33.988893    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0806 00:35:33.991956    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: Pid is 4303
	I0806 00:35:33.992376    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 0
	I0806 00:35:33.992391    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:33.992446    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:33.993278    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:33.993360    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:33.993380    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:33.993405    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:33.993424    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:33.993437    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:33.993449    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:33.993464    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:33.993498    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:33.993520    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:33.993540    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:33.993552    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:33.993562    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:33.999245    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0806 00:35:34.053136    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0806 00:35:34.053714    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:35:34.053737    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:35:34.053746    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:35:34.053754    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:35:34.433368    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0806 00:35:34.433384    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0806 00:35:34.548018    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:35:34.548040    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:35:34.548066    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:35:34.548085    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:35:34.548944    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0806 00:35:34.548954    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0806 00:35:35.995149    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 1
	I0806 00:35:35.995163    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:35.995266    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:35.996054    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:35.996094    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:35.996108    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:35.996132    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:35.996169    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:35.996185    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:35.996200    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:35.996223    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:35.996236    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:35.996250    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:35.996258    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:35.996265    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:35.996272    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:37.997721    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 2
	I0806 00:35:37.997737    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:37.997833    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:37.998751    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:37.998796    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:37.998808    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:37.998817    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:37.998824    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:37.998834    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:37.998843    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:37.998850    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:37.998857    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:37.998872    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:37.998885    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:37.998906    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:37.998915    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:40.000050    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 3
	I0806 00:35:40.000064    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:40.000167    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:40.000922    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:40.000982    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:40.000992    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:40.001002    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:40.001009    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:40.001016    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:40.001021    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:40.001028    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:40.001034    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:40.001051    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:40.001065    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:40.001075    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:40.001092    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:40.125670    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:40 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0806 00:35:40.125726    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:40 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0806 00:35:40.125735    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:40 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0806 00:35:40.149566    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:40 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0806 00:35:42.001968    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 4
	I0806 00:35:42.001983    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:42.002066    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:42.002835    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:42.002890    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:42.002900    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:42.002909    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:42.002917    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:42.002940    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:42.002948    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:42.002955    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:42.002964    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:42.002970    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:42.002978    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:42.002985    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:42.002996    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:44.004662    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 5
	I0806 00:35:44.004678    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:44.004700    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:44.005526    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:44.005569    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:35:44.005581    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:35:44.005591    4292 main.go:141] libmachine: (multinode-100000) DBG | Found match: 1a:eb:5b:3:28:91
	I0806 00:35:44.005619    4292 main.go:141] libmachine: (multinode-100000) DBG | IP: 192.169.0.13
	I0806 00:35:44.005700    4292 main.go:141] libmachine: (multinode-100000) Calling .GetConfigRaw
	I0806 00:35:44.006323    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:44.006428    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:44.006524    4292 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0806 00:35:44.006537    4292 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:35:44.006634    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:44.006694    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:44.007476    4292 main.go:141] libmachine: Detecting operating system of created instance...
	I0806 00:35:44.007487    4292 main.go:141] libmachine: Waiting for SSH to be available...
	I0806 00:35:44.007493    4292 main.go:141] libmachine: Getting to WaitForSSH function...
	I0806 00:35:44.007498    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:44.007591    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:44.007674    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:44.007764    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:44.007853    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:44.007987    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:44.008184    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:44.008192    4292 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0806 00:35:45.076448    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:35:45.076465    4292 main.go:141] libmachine: Detecting the provisioner...
	I0806 00:35:45.076471    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.076624    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.076724    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.076819    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.076915    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.077045    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.077189    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.077197    4292 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0806 00:35:45.144548    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0806 00:35:45.144591    4292 main.go:141] libmachine: found compatible host: buildroot
	I0806 00:35:45.144598    4292 main.go:141] libmachine: Provisioning with buildroot...
	I0806 00:35:45.144603    4292 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:35:45.144740    4292 buildroot.go:166] provisioning hostname "multinode-100000"
	I0806 00:35:45.144749    4292 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:35:45.144843    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.144938    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.145034    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.145124    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.145213    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.145351    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.145492    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.145501    4292 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-100000 && echo "multinode-100000" | sudo tee /etc/hostname
	I0806 00:35:45.223228    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-100000
	
	I0806 00:35:45.223249    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.223379    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.223481    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.223570    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.223660    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.223790    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.223939    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.223951    4292 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-100000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-100000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-100000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 00:35:45.292034    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:35:45.292059    4292 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19370-944/.minikube CaCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19370-944/.minikube}
	I0806 00:35:45.292078    4292 buildroot.go:174] setting up certificates
	I0806 00:35:45.292089    4292 provision.go:84] configureAuth start
	I0806 00:35:45.292095    4292 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:35:45.292225    4292 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:35:45.292323    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.292419    4292 provision.go:143] copyHostCerts
	I0806 00:35:45.292449    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:35:45.292512    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem, removing ...
	I0806 00:35:45.292520    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:35:45.292668    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem (1078 bytes)
	I0806 00:35:45.292900    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:35:45.292931    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem, removing ...
	I0806 00:35:45.292935    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:35:45.293022    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem (1123 bytes)
	I0806 00:35:45.293179    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:35:45.293218    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem, removing ...
	I0806 00:35:45.293223    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:35:45.293307    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem (1679 bytes)
	I0806 00:35:45.293461    4292 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem org=jenkins.multinode-100000 san=[127.0.0.1 192.169.0.13 localhost minikube multinode-100000]
	I0806 00:35:45.520073    4292 provision.go:177] copyRemoteCerts
	I0806 00:35:45.520131    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 00:35:45.520149    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.520304    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.520400    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.520492    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.520588    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:35:45.562400    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0806 00:35:45.562481    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 00:35:45.581346    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0806 00:35:45.581402    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0806 00:35:45.600722    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0806 00:35:45.600779    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 00:35:45.620152    4292 provision.go:87] duration metric: took 328.044128ms to configureAuth
	I0806 00:35:45.620167    4292 buildroot.go:189] setting minikube options for container-runtime
	I0806 00:35:45.620308    4292 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:35:45.620324    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:45.620480    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.620572    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.620655    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.620746    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.620832    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.620951    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.621092    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.621099    4292 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0806 00:35:45.688009    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0806 00:35:45.688025    4292 buildroot.go:70] root file system type: tmpfs
	I0806 00:35:45.688103    4292 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0806 00:35:45.688116    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.688258    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.688371    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.688463    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.688579    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.688745    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.688882    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.688931    4292 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0806 00:35:45.766293    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0806 00:35:45.766319    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.766466    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.766564    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.766645    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.766724    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.766843    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.766987    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.766999    4292 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0806 00:35:47.341714    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0806 00:35:47.341733    4292 main.go:141] libmachine: Checking connection to Docker...
	I0806 00:35:47.341750    4292 main.go:141] libmachine: (multinode-100000) Calling .GetURL
	I0806 00:35:47.341889    4292 main.go:141] libmachine: Docker is up and running!
	I0806 00:35:47.341898    4292 main.go:141] libmachine: Reticulating splines...
	I0806 00:35:47.341902    4292 client.go:171] duration metric: took 14.241464585s to LocalClient.Create
	I0806 00:35:47.341919    4292 start.go:167] duration metric: took 14.241510649s to libmachine.API.Create "multinode-100000"
	I0806 00:35:47.341930    4292 start.go:293] postStartSetup for "multinode-100000" (driver="hyperkit")
	I0806 00:35:47.341937    4292 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 00:35:47.341947    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.342092    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 00:35:47.342105    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:47.342199    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:47.342285    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.342379    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:47.342467    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:35:47.382587    4292 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 00:35:47.385469    4292 command_runner.go:130] > NAME=Buildroot
	I0806 00:35:47.385477    4292 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0806 00:35:47.385481    4292 command_runner.go:130] > ID=buildroot
	I0806 00:35:47.385485    4292 command_runner.go:130] > VERSION_ID=2023.02.9
	I0806 00:35:47.385489    4292 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0806 00:35:47.385581    4292 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 00:35:47.385594    4292 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/addons for local assets ...
	I0806 00:35:47.385696    4292 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/files for local assets ...
	I0806 00:35:47.385887    4292 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> 14372.pem in /etc/ssl/certs
	I0806 00:35:47.385903    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> /etc/ssl/certs/14372.pem
	I0806 00:35:47.386118    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 00:35:47.394135    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem --> /etc/ssl/certs/14372.pem (1708 bytes)
	I0806 00:35:47.413151    4292 start.go:296] duration metric: took 71.212336ms for postStartSetup
	I0806 00:35:47.413177    4292 main.go:141] libmachine: (multinode-100000) Calling .GetConfigRaw
	I0806 00:35:47.413783    4292 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:35:47.413932    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:35:47.414265    4292 start.go:128] duration metric: took 14.346903661s to createHost
	I0806 00:35:47.414279    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:47.414369    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:47.414451    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.414534    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.414620    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:47.414723    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:47.414850    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:47.414859    4292 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 00:35:47.480376    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722929747.524109427
	
	I0806 00:35:47.480388    4292 fix.go:216] guest clock: 1722929747.524109427
	I0806 00:35:47.480393    4292 fix.go:229] Guest: 2024-08-06 00:35:47.524109427 -0700 PDT Remote: 2024-08-06 00:35:47.414273 -0700 PDT m=+14.774098631 (delta=109.836427ms)
	I0806 00:35:47.480413    4292 fix.go:200] guest clock delta is within tolerance: 109.836427ms
	I0806 00:35:47.480416    4292 start.go:83] releasing machines lock for "multinode-100000", held for 14.413201307s
	I0806 00:35:47.480435    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.480582    4292 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:35:47.480686    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.481025    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.481144    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.481220    4292 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 00:35:47.481250    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:47.481279    4292 ssh_runner.go:195] Run: cat /version.json
	I0806 00:35:47.481291    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:47.481352    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:47.481353    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:47.481449    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.481463    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.481541    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:47.481556    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:47.481638    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:35:47.481653    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:35:47.582613    4292 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0806 00:35:47.583428    4292 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0806 00:35:47.583596    4292 ssh_runner.go:195] Run: systemctl --version
	I0806 00:35:47.588843    4292 command_runner.go:130] > systemd 252 (252)
	I0806 00:35:47.588866    4292 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0806 00:35:47.588920    4292 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0806 00:35:47.593612    4292 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0806 00:35:47.593639    4292 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 00:35:47.593687    4292 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 00:35:47.607350    4292 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0806 00:35:47.607480    4292 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 00:35:47.607494    4292 start.go:495] detecting cgroup driver to use...
	I0806 00:35:47.607588    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:35:47.622260    4292 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0806 00:35:47.622586    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0806 00:35:47.631764    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0806 00:35:47.640650    4292 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0806 00:35:47.640704    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0806 00:35:47.649724    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:35:47.658558    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0806 00:35:47.667341    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:35:47.677183    4292 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 00:35:47.686281    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0806 00:35:47.695266    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0806 00:35:47.704014    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0806 00:35:47.712970    4292 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 00:35:47.720743    4292 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0806 00:35:47.720841    4292 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 00:35:47.728846    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:35:47.828742    4292 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0806 00:35:47.848191    4292 start.go:495] detecting cgroup driver to use...
	I0806 00:35:47.848271    4292 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0806 00:35:47.862066    4292 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0806 00:35:47.862604    4292 command_runner.go:130] > [Unit]
	I0806 00:35:47.862619    4292 command_runner.go:130] > Description=Docker Application Container Engine
	I0806 00:35:47.862625    4292 command_runner.go:130] > Documentation=https://docs.docker.com
	I0806 00:35:47.862630    4292 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0806 00:35:47.862634    4292 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0806 00:35:47.862642    4292 command_runner.go:130] > StartLimitBurst=3
	I0806 00:35:47.862646    4292 command_runner.go:130] > StartLimitIntervalSec=60
	I0806 00:35:47.862663    4292 command_runner.go:130] > [Service]
	I0806 00:35:47.862670    4292 command_runner.go:130] > Type=notify
	I0806 00:35:47.862674    4292 command_runner.go:130] > Restart=on-failure
	I0806 00:35:47.862696    4292 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0806 00:35:47.862704    4292 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0806 00:35:47.862710    4292 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0806 00:35:47.862716    4292 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0806 00:35:47.862724    4292 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0806 00:35:47.862731    4292 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0806 00:35:47.862742    4292 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0806 00:35:47.862756    4292 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0806 00:35:47.862768    4292 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0806 00:35:47.862789    4292 command_runner.go:130] > ExecStart=
	I0806 00:35:47.862803    4292 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0806 00:35:47.862808    4292 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0806 00:35:47.862814    4292 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0806 00:35:47.862820    4292 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0806 00:35:47.862826    4292 command_runner.go:130] > LimitNOFILE=infinity
	I0806 00:35:47.862831    4292 command_runner.go:130] > LimitNPROC=infinity
	I0806 00:35:47.862835    4292 command_runner.go:130] > LimitCORE=infinity
	I0806 00:35:47.862840    4292 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0806 00:35:47.862847    4292 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0806 00:35:47.862852    4292 command_runner.go:130] > TasksMax=infinity
	I0806 00:35:47.862857    4292 command_runner.go:130] > TimeoutStartSec=0
	I0806 00:35:47.862864    4292 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0806 00:35:47.862869    4292 command_runner.go:130] > Delegate=yes
	I0806 00:35:47.862875    4292 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0806 00:35:47.862880    4292 command_runner.go:130] > KillMode=process
	I0806 00:35:47.862885    4292 command_runner.go:130] > [Install]
	I0806 00:35:47.862897    4292 command_runner.go:130] > WantedBy=multi-user.target
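	The drop-in file dumped above relies on systemd's ExecStart-reset rule that its own comments describe: a bare `ExecStart=` line must clear the command inherited from the base unit before a new one is set. A minimal sketch of that pattern (the paths and the dockerd command line here are illustrative, not the unit minikube actually installs):

```shell
# Sketch: build a systemd drop-in that resets the inherited ExecStart.
# A real drop-in would live under /etc/systemd/system/docker.service.d/;
# this uses a temp dir so it can run without root.
dropin_dir=$(mktemp -d)
cat > "$dropin_dir/10-machine.conf" <<'EOF'
[Service]
# An empty ExecStart= clears the command inherited from the base unit.
# Without it, systemd sees two ExecStart= settings and refuses to start
# the service (only Type=oneshot units may have more than one).
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
EOF
# After installing a real drop-in: sudo systemctl daemon-reload
```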
	I0806 00:35:47.862957    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:35:47.874503    4292 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 00:35:47.888401    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:35:47.899678    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:35:47.910858    4292 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0806 00:35:47.935194    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:35:47.946319    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:35:47.961240    4292 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0806 00:35:47.961509    4292 ssh_runner.go:195] Run: which cri-dockerd
	I0806 00:35:47.964405    4292 command_runner.go:130] > /usr/bin/cri-dockerd
	I0806 00:35:47.964539    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0806 00:35:47.972571    4292 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0806 00:35:47.986114    4292 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0806 00:35:48.089808    4292 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0806 00:35:48.189821    4292 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0806 00:35:48.189902    4292 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0806 00:35:48.205371    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:35:48.305180    4292 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:35:50.610688    4292 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.305442855s)
	I0806 00:35:50.610744    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0806 00:35:50.621917    4292 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0806 00:37:45.085447    4292 ssh_runner.go:235] Completed: sudo systemctl stop cri-docker.socket: (1m54.461245771s)
	I0806 00:37:45.085519    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0806 00:37:45.097196    4292 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0806 00:37:45.197114    4292 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0806 00:37:45.292406    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:37:45.391129    4292 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0806 00:37:45.405046    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0806 00:37:45.416102    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:37:45.533604    4292 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0806 00:37:45.589610    4292 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0806 00:37:45.589706    4292 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0806 00:37:45.594037    4292 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0806 00:37:45.594049    4292 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0806 00:37:45.594054    4292 command_runner.go:130] > Device: 0,22	Inode: 805         Links: 1
	I0806 00:37:45.594060    4292 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0806 00:37:45.594064    4292 command_runner.go:130] > Access: 2024-08-06 07:37:45.625216614 +0000
	I0806 00:37:45.594069    4292 command_runner.go:130] > Modify: 2024-08-06 07:37:45.625216614 +0000
	I0806 00:37:45.594073    4292 command_runner.go:130] > Change: 2024-08-06 07:37:45.627215775 +0000
	I0806 00:37:45.594076    4292 command_runner.go:130] >  Birth: -
	I0806 00:37:45.594117    4292 start.go:563] Will wait 60s for crictl version
	I0806 00:37:45.594161    4292 ssh_runner.go:195] Run: which crictl
	I0806 00:37:45.596956    4292 command_runner.go:130] > /usr/bin/crictl
	I0806 00:37:45.597171    4292 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 00:37:45.621060    4292 command_runner.go:130] > Version:  0.1.0
	I0806 00:37:45.621116    4292 command_runner.go:130] > RuntimeName:  docker
	I0806 00:37:45.621195    4292 command_runner.go:130] > RuntimeVersion:  27.1.1
	I0806 00:37:45.621265    4292 command_runner.go:130] > RuntimeApiVersion:  v1
	I0806 00:37:45.622461    4292 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0806 00:37:45.622524    4292 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0806 00:37:45.639748    4292 command_runner.go:130] > 27.1.1
	I0806 00:37:45.640898    4292 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0806 00:37:45.659970    4292 command_runner.go:130] > 27.1.1
	I0806 00:37:45.682623    4292 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0806 00:37:45.682654    4292 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:37:45.682940    4292 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0806 00:37:45.686120    4292 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
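	The one-liner above updates `/etc/hosts` idempotently: strip any stale line for the name, append the fresh mapping, and replace the file in one move. A sketch of the same pattern against a temp file instead of the real `/etc/hosts`:

```shell
# Sketch of the hosts-update pattern above: remove any existing entry for
# the name (matched as tab + name at end of line), append the new mapping,
# then swap the rewritten file into place.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.169.0.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
```

Writing to a temp file and `mv`-ing it in (rather than editing in place) means readers never see a half-written hosts file.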
	I0806 00:37:45.696475    4292 kubeadm.go:883] updating cluster {Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 00:37:45.696537    4292 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:37:45.696591    4292 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0806 00:37:45.709358    4292 docker.go:685] Got preloaded images: 
	I0806 00:37:45.709371    4292 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0806 00:37:45.709415    4292 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0806 00:37:45.717614    4292 command_runner.go:139] > {"Repositories":{}}
	I0806 00:37:45.717741    4292 ssh_runner.go:195] Run: which lz4
	I0806 00:37:45.720684    4292 command_runner.go:130] > /usr/bin/lz4
	I0806 00:37:45.720774    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0806 00:37:45.720887    4292 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0806 00:37:45.723901    4292 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 00:37:45.723990    4292 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 00:37:45.724007    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
	I0806 00:37:46.617374    4292 docker.go:649] duration metric: took 896.51057ms to copy over tarball
	I0806 00:37:46.617438    4292 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0806 00:37:48.962709    4292 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.345209203s)
	I0806 00:37:48.962723    4292 ssh_runner.go:146] rm: /preloaded.tar.lz4
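	The preload step above copies an lz4-compressed tarball into the VM and unpacks it with `tar -I lz4` (GNU tar's `--use-compress-program`), preserving security xattrs. A small round-trip sketch of that mechanism; it falls back to gzip as the compressor when `lz4` is not installed, and omits the xattr flags since the sample file carries none:

```shell
# Sketch: pack and unpack a directory via tar with an external compressor,
# as the preload extraction above does. gzip stands in if lz4 is absent.
comp=$(command -v lz4 || echo gzip)
src=$(mktemp -d); dst=$(mktemp -d); archive=$(mktemp)
echo hello > "$src/file.txt"
tar -C "$src" -I "$comp" -cf "$archive" .
tar -I "$comp" -C "$dst" -xf "$archive"
```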
	I0806 00:37:48.989708    4292 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0806 00:37:48.998314    4292 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.3":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.3":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.3":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d2
89d99da794784d1"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.3":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0806 00:37:48.998434    4292 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0806 00:37:49.011940    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:37:49.104996    4292 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:37:51.441428    4292 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.336367372s)
	I0806 00:37:51.441504    4292 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0806 00:37:51.454654    4292 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0806 00:37:51.454669    4292 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0806 00:37:51.454674    4292 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0806 00:37:51.454682    4292 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0806 00:37:51.454686    4292 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0806 00:37:51.454690    4292 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0806 00:37:51.454695    4292 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0806 00:37:51.454700    4292 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:37:51.455392    4292 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0806 00:37:51.455409    4292 cache_images.go:84] Images are preloaded, skipping loading
	I0806 00:37:51.455420    4292 kubeadm.go:934] updating node { 192.169.0.13 8443 v1.30.3 docker true true} ...
	I0806 00:37:51.455506    4292 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-100000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 00:37:51.455578    4292 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0806 00:37:51.493148    4292 command_runner.go:130] > cgroupfs
	I0806 00:37:51.493761    4292 cni.go:84] Creating CNI manager for ""
	I0806 00:37:51.493770    4292 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0806 00:37:51.493779    4292 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 00:37:51.493799    4292 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.13 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-100000 NodeName:multinode-100000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 00:37:51.493886    4292 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-100000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
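	The generated kubeadm config above is a single YAML stream of four documents separated by `---` markers: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A sketch that lists the `kind:` of each document in such a stream (using a stripped-down copy of the stream, not the full config):

```shell
# Sketch: enumerate the kinds in a multi-document kubeadm YAML stream.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
kinds=$(grep '^kind:' "$cfg" | awk '{print $2}' | paste -sd, -)
```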
	I0806 00:37:51.493946    4292 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 00:37:51.501517    4292 command_runner.go:130] > kubeadm
	I0806 00:37:51.501524    4292 command_runner.go:130] > kubectl
	I0806 00:37:51.501527    4292 command_runner.go:130] > kubelet
	I0806 00:37:51.501670    4292 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 00:37:51.501712    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 00:37:51.509045    4292 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0806 00:37:51.522572    4292 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 00:37:51.535791    4292 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0806 00:37:51.549550    4292 ssh_runner.go:195] Run: grep 192.169.0.13	control-plane.minikube.internal$ /etc/hosts
	I0806 00:37:51.552639    4292 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 00:37:51.562209    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:37:51.657200    4292 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:37:51.669303    4292 certs.go:68] Setting up /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000 for IP: 192.169.0.13
	I0806 00:37:51.669315    4292 certs.go:194] generating shared ca certs ...
	I0806 00:37:51.669325    4292 certs.go:226] acquiring lock for ca certs: {Name:mk58145664d6c2b1eff70ba1600cc91cf1a11355 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.669518    4292 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.key
	I0806 00:37:51.669593    4292 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.key
	I0806 00:37:51.669606    4292 certs.go:256] generating profile certs ...
	I0806 00:37:51.669656    4292 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key
	I0806 00:37:51.669668    4292 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt with IP's: []
	I0806 00:37:51.792624    4292 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt ...
	I0806 00:37:51.792639    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt: {Name:mk8667fc194de8cf8fded4f6b0b716fe105f94fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.792981    4292 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key ...
	I0806 00:37:51.792989    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key: {Name:mk5693609b0c83eb3bce2eae7a5d8211445280d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.793215    4292 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key.de816dec
	I0806 00:37:51.793229    4292 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt.de816dec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.13]
	I0806 00:37:51.926808    4292 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt.de816dec ...
	I0806 00:37:51.926818    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt.de816dec: {Name:mk977e2f365dba4e3b0587a998566fa4d7926493 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.927069    4292 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key.de816dec ...
	I0806 00:37:51.927078    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key.de816dec: {Name:mkdef83341ea7ae5698bd9e2d60c39f8cd2a4e46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.927285    4292 certs.go:381] copying /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt.de816dec -> /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt
	I0806 00:37:51.927484    4292 certs.go:385] copying /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key.de816dec -> /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key
	I0806 00:37:51.927653    4292 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key
	I0806 00:37:51.927669    4292 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt with IP's: []
	I0806 00:37:52.088433    4292 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt ...
	I0806 00:37:52.088444    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt: {Name:mkc673b9a3bc6652ddb14f333f9d124c615a6826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:52.088718    4292 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key ...
	I0806 00:37:52.088726    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key: {Name:mkf7f90929aa11855cc285630f5ad4bb575ccae4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:52.088945    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0806 00:37:52.088974    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0806 00:37:52.088995    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0806 00:37:52.089015    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0806 00:37:52.089034    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0806 00:37:52.089054    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0806 00:37:52.089072    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0806 00:37:52.089091    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0806 00:37:52.089188    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437.pem (1338 bytes)
	W0806 00:37:52.089246    4292 certs.go:480] ignoring /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437_empty.pem, impossibly tiny 0 bytes
	I0806 00:37:52.089257    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 00:37:52.089300    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem (1078 bytes)
	I0806 00:37:52.089366    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem (1123 bytes)
	I0806 00:37:52.089422    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem (1679 bytes)
	I0806 00:37:52.089542    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem (1708 bytes)
	I0806 00:37:52.089590    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.089613    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.089632    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437.pem -> /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.090046    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 00:37:52.111710    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 00:37:52.131907    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 00:37:52.151479    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0806 00:37:52.171693    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0806 00:37:52.191484    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 00:37:52.211176    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 00:37:52.230802    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 00:37:52.250506    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem --> /usr/share/ca-certificates/14372.pem (1708 bytes)
	I0806 00:37:52.270606    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 00:37:52.290275    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437.pem --> /usr/share/ca-certificates/1437.pem (1338 bytes)
	I0806 00:37:52.309237    4292 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 00:37:52.323119    4292 ssh_runner.go:195] Run: openssl version
	I0806 00:37:52.327113    4292 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0806 00:37:52.327315    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14372.pem && ln -fs /usr/share/ca-certificates/14372.pem /etc/ssl/certs/14372.pem"
	I0806 00:37:52.335532    4292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.338816    4292 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  6 07:14 /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.338844    4292 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:14 /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.338901    4292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.343016    4292 command_runner.go:130] > 3ec20f2e
	I0806 00:37:52.343165    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14372.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 00:37:52.351433    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 00:37:52.362210    4292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.368669    4292 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  6 07:05 /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.368937    4292 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:05 /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.368987    4292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.373469    4292 command_runner.go:130] > b5213941
	I0806 00:37:52.373704    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 00:37:52.384235    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1437.pem && ln -fs /usr/share/ca-certificates/1437.pem /etc/ssl/certs/1437.pem"
	I0806 00:37:52.395305    4292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.400212    4292 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  6 07:14 /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.400421    4292 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:14 /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.400474    4292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.406136    4292 command_runner.go:130] > 51391683
	I0806 00:37:52.406235    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1437.pem /etc/ssl/certs/51391683.0"
	I0806 00:37:52.415464    4292 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 00:37:52.418597    4292 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0806 00:37:52.418637    4292 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0806 00:37:52.418680    4292 kubeadm.go:392] StartCluster: {Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:37:52.418767    4292 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0806 00:37:52.431331    4292 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 00:37:52.439651    4292 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0806 00:37:52.439663    4292 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0806 00:37:52.439684    4292 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0806 00:37:52.439814    4292 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 00:37:52.447838    4292 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 00:37:52.455844    4292 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0806 00:37:52.455854    4292 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0806 00:37:52.455860    4292 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0806 00:37:52.455865    4292 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 00:37:52.455878    4292 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 00:37:52.455884    4292 kubeadm.go:157] found existing configuration files:
	
	I0806 00:37:52.455917    4292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 00:37:52.463564    4292 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 00:37:52.463581    4292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 00:37:52.463638    4292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 00:37:52.471500    4292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 00:37:52.479060    4292 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 00:37:52.479083    4292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 00:37:52.479115    4292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 00:37:52.487038    4292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 00:37:52.494658    4292 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 00:37:52.494678    4292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 00:37:52.494715    4292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 00:37:52.502699    4292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 00:37:52.510396    4292 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 00:37:52.510413    4292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 00:37:52.510448    4292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 00:37:52.518459    4292 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 00:37:52.582551    4292 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0806 00:37:52.582567    4292 command_runner.go:130] > [init] Using Kubernetes version: v1.30.3
	I0806 00:37:52.582622    4292 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 00:37:52.582630    4292 command_runner.go:130] > [preflight] Running pre-flight checks
	I0806 00:37:52.670948    4292 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 00:37:52.670966    4292 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 00:37:52.671056    4292 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 00:37:52.671068    4292 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 00:37:52.671166    4292 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 00:37:52.671175    4292 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 00:37:52.840152    4292 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 00:37:52.840173    4292 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 00:37:52.860448    4292 out.go:204]   - Generating certificates and keys ...
	I0806 00:37:52.860515    4292 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0806 00:37:52.860522    4292 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 00:37:52.860574    4292 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0806 00:37:52.860578    4292 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 00:37:53.262704    4292 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0806 00:37:53.262716    4292 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0806 00:37:53.357977    4292 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0806 00:37:53.357990    4292 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0806 00:37:53.460380    4292 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0806 00:37:53.460383    4292 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0806 00:37:53.557795    4292 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0806 00:37:53.557804    4292 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0806 00:37:53.672961    4292 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0806 00:37:53.672972    4292 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0806 00:37:53.673143    4292 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-100000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0806 00:37:53.673153    4292 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-100000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0806 00:37:53.823821    4292 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0806 00:37:53.823828    4292 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0806 00:37:53.823935    4292 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-100000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0806 00:37:53.823943    4292 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-100000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0806 00:37:53.907043    4292 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0806 00:37:53.907053    4292 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0806 00:37:54.170203    4292 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0806 00:37:54.170215    4292 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0806 00:37:54.232963    4292 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0806 00:37:54.232976    4292 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0806 00:37:54.233108    4292 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 00:37:54.233115    4292 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 00:37:54.560300    4292 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 00:37:54.560310    4292 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 00:37:54.689503    4292 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0806 00:37:54.689520    4292 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0806 00:37:54.772704    4292 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 00:37:54.772714    4292 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 00:37:54.901757    4292 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 00:37:54.901770    4292 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 00:37:55.057967    4292 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 00:37:55.057987    4292 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 00:37:55.058372    4292 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 00:37:55.058381    4292 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 00:37:55.060093    4292 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 00:37:55.060100    4292 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 00:37:55.081494    4292 out.go:204]   - Booting up control plane ...
	I0806 00:37:55.081559    4292 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 00:37:55.081566    4292 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 00:37:55.081622    4292 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 00:37:55.081627    4292 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 00:37:55.081688    4292 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 00:37:55.081706    4292 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 00:37:55.081835    4292 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 00:37:55.081836    4292 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 00:37:55.081921    4292 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 00:37:55.081928    4292 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 00:37:55.081962    4292 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 00:37:55.081972    4292 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0806 00:37:55.190382    4292 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0806 00:37:55.190382    4292 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0806 00:37:55.190467    4292 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0806 00:37:55.190474    4292 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0806 00:37:55.692270    4292 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.007026ms
	I0806 00:37:55.692288    4292 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 502.007026ms
	I0806 00:37:55.692374    4292 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0806 00:37:55.692383    4292 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0806 00:37:59.693684    4292 kubeadm.go:310] [api-check] The API server is healthy after 4.003026548s
	I0806 00:37:59.693693    4292 command_runner.go:130] > [api-check] The API server is healthy after 4.003026548s
	I0806 00:37:59.705633    4292 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0806 00:37:59.705646    4292 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0806 00:37:59.720099    4292 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0806 00:37:59.720109    4292 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0806 00:37:59.738249    4292 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0806 00:37:59.738275    4292 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0806 00:37:59.738423    4292 kubeadm.go:310] [mark-control-plane] Marking the node multinode-100000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0806 00:37:59.738434    4292 command_runner.go:130] > [mark-control-plane] Marking the node multinode-100000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0806 00:37:59.745383    4292 kubeadm.go:310] [bootstrap-token] Using token: vbomjh.qsf72loo4zgv06fc
	I0806 00:37:59.745397    4292 command_runner.go:130] > [bootstrap-token] Using token: vbomjh.qsf72loo4zgv06fc
	I0806 00:37:59.783358    4292 out.go:204]   - Configuring RBAC rules ...
	I0806 00:37:59.783539    4292 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0806 00:37:59.783560    4292 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0806 00:37:59.785907    4292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0806 00:37:59.785948    4292 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0806 00:37:59.826999    4292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0806 00:37:59.827006    4292 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0806 00:37:59.829623    4292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0806 00:37:59.829627    4292 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0806 00:37:59.832217    4292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0806 00:37:59.832231    4292 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0806 00:37:59.834614    4292 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0806 00:37:59.834628    4292 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0806 00:38:00.099434    4292 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0806 00:38:00.099444    4292 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0806 00:38:00.510267    4292 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0806 00:38:00.510286    4292 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0806 00:38:01.098516    4292 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0806 00:38:01.098535    4292 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0806 00:38:01.099426    4292 kubeadm.go:310] 
	I0806 00:38:01.099476    4292 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0806 00:38:01.099482    4292 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0806 00:38:01.099485    4292 kubeadm.go:310] 
	I0806 00:38:01.099571    4292 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0806 00:38:01.099579    4292 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0806 00:38:01.099583    4292 kubeadm.go:310] 
	I0806 00:38:01.099621    4292 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0806 00:38:01.099627    4292 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0806 00:38:01.099685    4292 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0806 00:38:01.099692    4292 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0806 00:38:01.099737    4292 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0806 00:38:01.099742    4292 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0806 00:38:01.099758    4292 kubeadm.go:310] 
	I0806 00:38:01.099805    4292 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0806 00:38:01.099811    4292 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0806 00:38:01.099816    4292 kubeadm.go:310] 
	I0806 00:38:01.099868    4292 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0806 00:38:01.099874    4292 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0806 00:38:01.099878    4292 kubeadm.go:310] 
	I0806 00:38:01.099924    4292 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0806 00:38:01.099932    4292 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0806 00:38:01.099998    4292 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0806 00:38:01.100012    4292 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0806 00:38:01.100083    4292 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0806 00:38:01.100088    4292 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0806 00:38:01.100092    4292 kubeadm.go:310] 
	I0806 00:38:01.100168    4292 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0806 00:38:01.100177    4292 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0806 00:38:01.100245    4292 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0806 00:38:01.100249    4292 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0806 00:38:01.100256    4292 kubeadm.go:310] 
	I0806 00:38:01.100330    4292 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token vbomjh.qsf72loo4zgv06fc \
	I0806 00:38:01.100335    4292 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vbomjh.qsf72loo4zgv06fc \
	I0806 00:38:01.100422    4292 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e \
	I0806 00:38:01.100428    4292 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e \
	I0806 00:38:01.100450    4292 command_runner.go:130] > 	--control-plane 
	I0806 00:38:01.100454    4292 kubeadm.go:310] 	--control-plane 
	I0806 00:38:01.100465    4292 kubeadm.go:310] 
	I0806 00:38:01.100533    4292 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0806 00:38:01.100538    4292 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0806 00:38:01.100545    4292 kubeadm.go:310] 
	I0806 00:38:01.100605    4292 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token vbomjh.qsf72loo4zgv06fc \
	I0806 00:38:01.100610    4292 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vbomjh.qsf72loo4zgv06fc \
	I0806 00:38:01.100694    4292 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e 
	I0806 00:38:01.100703    4292 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e 
	I0806 00:38:01.101330    4292 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 00:38:01.101334    4292 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 00:38:01.101354    4292 cni.go:84] Creating CNI manager for ""
	I0806 00:38:01.101361    4292 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0806 00:38:01.123627    4292 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0806 00:38:01.196528    4292 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0806 00:38:01.201237    4292 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0806 00:38:01.201250    4292 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0806 00:38:01.201255    4292 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0806 00:38:01.201260    4292 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0806 00:38:01.201265    4292 command_runner.go:130] > Access: 2024-08-06 07:35:44.089192446 +0000
	I0806 00:38:01.201275    4292 command_runner.go:130] > Modify: 2024-07-29 16:10:03.000000000 +0000
	I0806 00:38:01.201282    4292 command_runner.go:130] > Change: 2024-08-06 07:35:42.019366338 +0000
	I0806 00:38:01.201285    4292 command_runner.go:130] >  Birth: -
	I0806 00:38:01.201457    4292 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0806 00:38:01.201465    4292 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0806 00:38:01.217771    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0806 00:38:01.451925    4292 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0806 00:38:01.451939    4292 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0806 00:38:01.451946    4292 command_runner.go:130] > serviceaccount/kindnet created
	I0806 00:38:01.451949    4292 command_runner.go:130] > daemonset.apps/kindnet created
	I0806 00:38:01.451970    4292 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 00:38:01.452056    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:01.452057    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-100000 minikube.k8s.io/updated_at=2024_08_06T00_38_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8 minikube.k8s.io/name=multinode-100000 minikube.k8s.io/primary=true
	I0806 00:38:01.610233    4292 command_runner.go:130] > node/multinode-100000 labeled
	I0806 00:38:01.611382    4292 command_runner.go:130] > -16
	I0806 00:38:01.611408    4292 ops.go:34] apiserver oom_adj: -16
	I0806 00:38:01.611436    4292 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0806 00:38:01.611535    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:01.673352    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:02.112700    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:02.170574    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:02.612824    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:02.681015    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:03.112860    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:03.173114    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:03.612060    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:03.674241    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:04.112239    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:04.174075    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:04.613016    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:04.675523    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:05.112239    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:05.171613    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:05.611863    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:05.672963    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:06.112009    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:06.167728    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:06.613273    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:06.670554    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:07.113057    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:07.167700    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:07.613035    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:07.675035    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:08.113568    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:08.177386    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:08.611850    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:08.669063    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:09.113472    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:09.173560    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:09.613780    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:09.676070    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:10.112109    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:10.172674    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:10.613930    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:10.669788    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:11.112032    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:11.178288    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:11.612564    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:11.681621    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:12.112219    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:12.169314    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:12.612581    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:12.670247    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:13.113181    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:13.172574    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:13.613362    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:13.672811    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:14.112553    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:14.177904    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:14.612414    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:14.708737    4292 command_runner.go:130] > NAME      SECRETS   AGE
	I0806 00:38:14.708751    4292 command_runner.go:130] > default   0         0s
	I0806 00:38:14.710041    4292 kubeadm.go:1113] duration metric: took 13.257790627s to wait for elevateKubeSystemPrivileges
	I0806 00:38:14.710058    4292 kubeadm.go:394] duration metric: took 22.29094538s to StartCluster
	I0806 00:38:14.710072    4292 settings.go:142] acquiring lock: {Name:mk7aec99dc6d69d6a2c18b35ff8bde3cddf78620 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:38:14.710182    4292 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:38:14.710733    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/kubeconfig: {Name:mka547673b59bc4eb06e1f2c8130de31708dba29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:38:14.710987    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0806 00:38:14.710992    4292 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 00:38:14.711032    4292 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 00:38:14.711084    4292 addons.go:69] Setting storage-provisioner=true in profile "multinode-100000"
	I0806 00:38:14.711092    4292 addons.go:69] Setting default-storageclass=true in profile "multinode-100000"
	I0806 00:38:14.711119    4292 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-100000"
	I0806 00:38:14.711121    4292 addons.go:234] Setting addon storage-provisioner=true in "multinode-100000"
	I0806 00:38:14.711168    4292 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:38:14.711168    4292 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:38:14.711516    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.711537    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.711593    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.711618    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.720676    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52433
	I0806 00:38:14.721047    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52435
	I0806 00:38:14.721245    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.721337    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.721602    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.721612    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.721697    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.721714    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.721841    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.721914    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.721953    4292 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:38:14.722073    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:14.722146    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:38:14.722387    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.722420    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.724119    4292 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:38:14.724644    4292 kapi.go:59] client config for multinode-100000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key", CAFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x126711a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 00:38:14.725326    4292 cert_rotation.go:137] Starting client certificate rotation controller
	I0806 00:38:14.725514    4292 addons.go:234] Setting addon default-storageclass=true in "multinode-100000"
	I0806 00:38:14.725534    4292 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:38:14.725758    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.725781    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.731505    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52437
	I0806 00:38:14.731883    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.732214    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.732225    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.732427    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.732542    4292 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:38:14.732646    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:14.732716    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:38:14.733688    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:38:14.734469    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52439
	I0806 00:38:14.749366    4292 out.go:177] * Verifying Kubernetes components...
	I0806 00:38:14.750086    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.771676    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.771692    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.771908    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.772346    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.772371    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.781133    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52441
	I0806 00:38:14.781487    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.781841    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.781857    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.782071    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.782186    4292 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:38:14.782264    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:14.782343    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:38:14.783274    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:38:14.783391    4292 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 00:38:14.783400    4292 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 00:38:14.783408    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:38:14.783487    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:38:14.783566    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:38:14.783647    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:38:14.783724    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:38:14.807507    4292 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:38:14.814402    4292 command_runner.go:130] > apiVersion: v1
	I0806 00:38:14.814414    4292 command_runner.go:130] > data:
	I0806 00:38:14.814417    4292 command_runner.go:130] >   Corefile: |
	I0806 00:38:14.814421    4292 command_runner.go:130] >     .:53 {
	I0806 00:38:14.814427    4292 command_runner.go:130] >         errors
	I0806 00:38:14.814434    4292 command_runner.go:130] >         health {
	I0806 00:38:14.814462    4292 command_runner.go:130] >            lameduck 5s
	I0806 00:38:14.814467    4292 command_runner.go:130] >         }
	I0806 00:38:14.814470    4292 command_runner.go:130] >         ready
	I0806 00:38:14.814475    4292 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0806 00:38:14.814479    4292 command_runner.go:130] >            pods insecure
	I0806 00:38:14.814483    4292 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0806 00:38:14.814491    4292 command_runner.go:130] >            ttl 30
	I0806 00:38:14.814494    4292 command_runner.go:130] >         }
	I0806 00:38:14.814498    4292 command_runner.go:130] >         prometheus :9153
	I0806 00:38:14.814502    4292 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0806 00:38:14.814511    4292 command_runner.go:130] >            max_concurrent 1000
	I0806 00:38:14.814515    4292 command_runner.go:130] >         }
	I0806 00:38:14.814519    4292 command_runner.go:130] >         cache 30
	I0806 00:38:14.814522    4292 command_runner.go:130] >         loop
	I0806 00:38:14.814527    4292 command_runner.go:130] >         reload
	I0806 00:38:14.814530    4292 command_runner.go:130] >         loadbalance
	I0806 00:38:14.814541    4292 command_runner.go:130] >     }
	I0806 00:38:14.814545    4292 command_runner.go:130] > kind: ConfigMap
	I0806 00:38:14.814548    4292 command_runner.go:130] > metadata:
	I0806 00:38:14.814555    4292 command_runner.go:130] >   creationTimestamp: "2024-08-06T07:38:00Z"
	I0806 00:38:14.814559    4292 command_runner.go:130] >   name: coredns
	I0806 00:38:14.814563    4292 command_runner.go:130] >   namespace: kube-system
	I0806 00:38:14.814566    4292 command_runner.go:130] >   resourceVersion: "257"
	I0806 00:38:14.814570    4292 command_runner.go:130] >   uid: d8fd854e-ee58-4cd2-8723-2418b89b5dc3
	I0806 00:38:14.814679    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0806 00:38:14.866135    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:38:14.866436    4292 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 00:38:14.866454    4292 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 00:38:14.866500    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:38:14.866990    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:38:14.867164    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:38:14.867290    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:38:14.867406    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:38:14.872742    4292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 00:38:15.241341    4292 command_runner.go:130] > configmap/coredns replaced
	I0806 00:38:15.242685    4292 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I0806 00:38:15.242758    4292 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:38:15.242961    4292 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:38:15.243148    4292 kapi.go:59] client config for multinode-100000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key", CAFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x126711a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 00:38:15.243392    4292 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0806 00:38:15.243400    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.243407    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.243411    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.256678    4292 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0806 00:38:15.256695    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.256702    4292 round_trippers.go:580]     Audit-Id: c7c6b1c0-d638-405d-9826-1613f9442124
	I0806 00:38:15.256715    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.256719    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.256721    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.256724    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.256731    4292 round_trippers.go:580]     Content-Length: 291
	I0806 00:38:15.256734    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.256762    4292 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a7f2b260-b404-47f8-94a7-9444b4d2e65d","resourceVersion":"385","creationTimestamp":"2024-08-06T07:38:00Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0806 00:38:15.257109    4292 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a7f2b260-b404-47f8-94a7-9444b4d2e65d","resourceVersion":"385","creationTimestamp":"2024-08-06T07:38:00Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0806 00:38:15.257149    4292 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0806 00:38:15.257157    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.257163    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.257166    4292 round_trippers.go:473]     Content-Type: application/json
	I0806 00:38:15.257169    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.263818    4292 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0806 00:38:15.263831    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.263837    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.263840    4292 round_trippers.go:580]     Content-Length: 291
	I0806 00:38:15.263843    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.263846    4292 round_trippers.go:580]     Audit-Id: fc5baf31-13f0-4c94-a234-c9583698bc4a
	I0806 00:38:15.263849    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.263853    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.263856    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.263869    4292 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a7f2b260-b404-47f8-94a7-9444b4d2e65d","resourceVersion":"387","creationTimestamp":"2024-08-06T07:38:00Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0806 00:38:15.288440    4292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 00:38:15.316986    4292 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0806 00:38:15.318339    4292 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:38:15.318523    4292 kapi.go:59] client config for multinode-100000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key", CAFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x126711a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 00:38:15.318703    4292 node_ready.go:35] waiting up to 6m0s for node "multinode-100000" to be "Ready" ...
	I0806 00:38:15.318752    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:15.318757    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.318762    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.318766    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.318890    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.318897    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.319084    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.319089    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.319096    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.319104    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.319113    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.319239    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.319249    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.319298    4292 round_trippers.go:463] GET https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses
	I0806 00:38:15.319296    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.319304    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.319313    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.319316    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.328466    4292 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0806 00:38:15.328478    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.328484    4292 round_trippers.go:580]     Content-Length: 1273
	I0806 00:38:15.328487    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.328490    4292 round_trippers.go:580]     Audit-Id: 55117bdb-b1b1-4b1d-a091-1eb3d07a9569
	I0806 00:38:15.328493    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.328496    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.328498    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.328501    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.328521    4292 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"396"},"items":[{"metadata":{"name":"standard","uid":"db2316a9-24ea-47df-bf39-03322fc9a8eb","resourceVersion":"396","creationTimestamp":"2024-08-06T07:38:15Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-06T07:38:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0806 00:38:15.328567    4292 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0806 00:38:15.328581    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.328586    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.328590    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.328593    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.328596    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.328599    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.328602    4292 round_trippers.go:580]     Audit-Id: 7ce70ed0-47c9-432d-8e5b-ac52e38e59a7
	I0806 00:38:15.328766    4292 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"db2316a9-24ea-47df-bf39-03322fc9a8eb","resourceVersion":"396","creationTimestamp":"2024-08-06T07:38:15Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-06T07:38:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0806 00:38:15.328802    4292 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0806 00:38:15.328808    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.328813    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.328818    4292 round_trippers.go:473]     Content-Type: application/json
	I0806 00:38:15.328820    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.330337    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:15.340216    4292 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0806 00:38:15.340231    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.340236    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.340243    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.340247    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.340251    4292 round_trippers.go:580]     Content-Length: 1220
	I0806 00:38:15.340254    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.340257    4292 round_trippers.go:580]     Audit-Id: 6dc8b90a-612f-4331-8c4e-911fcb5e8b97
	I0806 00:38:15.340261    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.340479    4292 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"db2316a9-24ea-47df-bf39-03322fc9a8eb","resourceVersion":"396","creationTimestamp":"2024-08-06T07:38:15Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-06T07:38:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0806 00:38:15.340564    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.340574    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.340728    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.340739    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.340746    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.606405    4292 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0806 00:38:15.610350    4292 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0806 00:38:15.615396    4292 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0806 00:38:15.619891    4292 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0806 00:38:15.627349    4292 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0806 00:38:15.635206    4292 command_runner.go:130] > pod/storage-provisioner created
	I0806 00:38:15.636675    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.636686    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.636830    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.636833    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.636843    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.636852    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.636857    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.636972    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.636980    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.636995    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.660876    4292 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0806 00:38:15.681735    4292 addons.go:510] duration metric: took 970.696783ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0806 00:38:15.744023    4292 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0806 00:38:15.744043    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.744049    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.744053    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.745471    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:15.745481    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.745486    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.745489    4292 round_trippers.go:580]     Audit-Id: 2e02dd3c-4368-4363-aef8-c54cb00d4e41
	I0806 00:38:15.745492    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.745495    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.745497    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.745500    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.745503    4292 round_trippers.go:580]     Content-Length: 291
	I0806 00:38:15.745519    4292 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a7f2b260-b404-47f8-94a7-9444b4d2e65d","resourceVersion":"399","creationTimestamp":"2024-08-06T07:38:00Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0806 00:38:15.745572    4292 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-100000" context rescaled to 1 replicas
	I0806 00:38:15.820125    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:15.820137    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.820143    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.820145    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.821478    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:15.821488    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.821495    4292 round_trippers.go:580]     Audit-Id: 2538e82b-a5b8-4cce-b67d-49b0a0cc6ccb
	I0806 00:38:15.821499    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.821504    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.821509    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.821513    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.821517    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.821699    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:16.318995    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:16.319022    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:16.319044    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:16.319050    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:16.321451    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:16.321466    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:16.321473    4292 round_trippers.go:580]     Audit-Id: 6d358883-b606-4bf9-b02f-6cb3dcc86ebb
	I0806 00:38:16.321478    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:16.321482    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:16.321507    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:16.321515    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:16.321519    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:16 GMT
	I0806 00:38:16.321636    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:16.819864    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:16.819880    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:16.819887    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:16.819892    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:16.822003    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:16.822013    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:16.822019    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:16.822032    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:16 GMT
	I0806 00:38:16.822039    4292 round_trippers.go:580]     Audit-Id: 688c294c-2ec1-4257-9ae2-31048566e1a5
	I0806 00:38:16.822042    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:16.822045    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:16.822048    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:16.822127    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:17.319875    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:17.319887    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:17.319893    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:17.319898    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:17.324202    4292 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 00:38:17.324219    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:17.324228    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:17.324233    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:17.324237    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:17.324247    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:17.324251    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:17 GMT
	I0806 00:38:17.324253    4292 round_trippers.go:580]     Audit-Id: 3cbcad32-1d66-4480-8eea-e0ba3baeb718
	I0806 00:38:17.324408    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:17.324668    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:17.818929    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:17.818941    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:17.818948    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:17.818952    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:17.820372    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:17.820383    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:17.820390    4292 round_trippers.go:580]     Audit-Id: 1b64d2ad-91d1-49c6-8964-cd044f7ab24f
	I0806 00:38:17.820395    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:17.820400    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:17.820404    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:17.820407    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:17.820409    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:17 GMT
	I0806 00:38:17.820562    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:18.318915    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:18.318928    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:18.318934    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:18.318937    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:18.320383    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:18.320392    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:18.320396    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:18.320400    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:18 GMT
	I0806 00:38:18.320403    4292 round_trippers.go:580]     Audit-Id: b404a6ee-15b9-4e15-b8f8-4cd9324a513d
	I0806 00:38:18.320405    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:18.320408    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:18.320411    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:18.320536    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:18.819634    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:18.819647    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:18.819654    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:18.819657    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:18.821628    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:18.821635    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:18.821639    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:18 GMT
	I0806 00:38:18.821643    4292 round_trippers.go:580]     Audit-Id: 12545d9e-2520-4675-8957-dd291bc1d252
	I0806 00:38:18.821646    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:18.821649    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:18.821651    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:18.821654    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:18.821749    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:19.319242    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:19.319258    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:19.319264    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:19.319267    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:19.320611    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:19.320621    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:19.320627    4292 round_trippers.go:580]     Audit-Id: a9b124b2-ff49-4d7d-961a-c4a1b6b3e4ab
	I0806 00:38:19.320630    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:19.320632    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:19.320635    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:19.320639    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:19.320642    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:19 GMT
	I0806 00:38:19.320781    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:19.820342    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:19.820371    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:19.820428    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:19.820437    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:19.823221    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:19.823242    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:19.823252    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:19.823258    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:19.823266    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:19.823272    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:19 GMT
	I0806 00:38:19.823291    4292 round_trippers.go:580]     Audit-Id: 9330a785-b406-42d7-a74c-e80b34311e1a
	I0806 00:38:19.823302    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:19.823409    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:19.823671    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:20.319027    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:20.319043    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:20.319051    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:20.319056    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:20.320812    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:20.320821    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:20.320827    4292 round_trippers.go:580]     Audit-Id: 1d9840bb-ba8b-45f8-852f-8ef7f645c8bd
	I0806 00:38:20.320830    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:20.320832    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:20.320835    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:20.320838    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:20.320841    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:20 GMT
	I0806 00:38:20.321034    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:20.819543    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:20.819566    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:20.819578    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:20.819585    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:20.822277    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:20.822293    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:20.822300    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:20.822303    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:20.822307    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:20 GMT
	I0806 00:38:20.822310    4292 round_trippers.go:580]     Audit-Id: 6a96712c-fdd2-4036-95c0-27109366b2b5
	I0806 00:38:20.822313    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:20.822332    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:20.822436    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:21.319938    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:21.320061    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:21.320076    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:21.320084    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:21.322332    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:21.322343    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:21.322350    4292 round_trippers.go:580]     Audit-Id: b6796df6-8c9c-475a-b9c2-e73edb1c0720
	I0806 00:38:21.322355    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:21.322359    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:21.322362    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:21.322366    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:21.322370    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:21 GMT
	I0806 00:38:21.322503    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:21.819349    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:21.819372    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:21.819383    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:21.819388    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:21.821890    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:21.821905    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:21.821912    4292 round_trippers.go:580]     Audit-Id: 89b2a861-f5a0-43e4-9d3f-01f7230eecc8
	I0806 00:38:21.821916    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:21.821920    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:21.821923    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:21.821927    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:21.821931    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:21 GMT
	I0806 00:38:21.822004    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:22.320544    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:22.320565    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:22.320576    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:22.320581    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:22.322858    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:22.322872    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:22.322879    4292 round_trippers.go:580]     Audit-Id: 70ae59be-bf9a-4c7a-9fb8-93ea504768fb
	I0806 00:38:22.322885    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:22.322888    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:22.322891    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:22.322895    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:22.322897    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:22 GMT
	I0806 00:38:22.323158    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:22.323412    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:22.819095    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:22.819114    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:22.819126    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:22.819132    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:22.821284    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:22.821297    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:22.821307    4292 round_trippers.go:580]     Audit-Id: 1c5d80ab-21c3-4733-bd98-f4c681e0fe0e
	I0806 00:38:22.821313    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:22.821318    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:22.821321    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:22.821324    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:22.821334    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:22 GMT
	I0806 00:38:22.821552    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:23.319478    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:23.319500    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:23.319518    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:23.319524    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:23.322104    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:23.322124    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:23.322132    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:23.322137    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:23.322143    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:23.322146    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:23.322156    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:23 GMT
	I0806 00:38:23.322161    4292 round_trippers.go:580]     Audit-Id: 5276d3f7-64a0-4983-b60c-4943cbdfd74f
	I0806 00:38:23.322305    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:23.819102    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:23.819121    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:23.819130    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:23.819135    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:23.821174    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:23.821208    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:23.821216    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:23.821222    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:23.821227    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:23.821230    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:23.821241    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:23 GMT
	I0806 00:38:23.821254    4292 round_trippers.go:580]     Audit-Id: 9a86a309-2e1e-4b43-9975-baf4a0c93f44
	I0806 00:38:23.821483    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:24.320265    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:24.320287    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:24.320299    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:24.320305    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:24.323064    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:24.323097    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:24.323123    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:24.323140    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:24.323149    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:24.323178    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:24.323185    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:24 GMT
	I0806 00:38:24.323196    4292 round_trippers.go:580]     Audit-Id: b0ef4ff1-b4d6-4fd5-870c-46b9f544b517
	I0806 00:38:24.323426    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:24.323675    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:24.819060    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:24.819080    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:24.819097    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:24.819136    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:24.821377    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:24.821390    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:24.821397    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:24 GMT
	I0806 00:38:24.821402    4292 round_trippers.go:580]     Audit-Id: b050183e-0245-4d40-9972-e2dd2be24181
	I0806 00:38:24.821405    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:24.821409    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:24.821413    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:24.821418    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:24.821619    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:25.319086    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:25.319102    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:25.319110    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:25.319114    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:25.321127    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:25.321149    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:25.321154    4292 round_trippers.go:580]     Audit-Id: b27c2996-2cfb-4ec2-83b6-49df62cf6805
	I0806 00:38:25.321177    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:25.321180    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:25.321184    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:25.321186    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:25.321194    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:25 GMT
	I0806 00:38:25.321259    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:25.820656    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:25.820678    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:25.820689    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:25.820695    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:25.823182    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:25.823194    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:25.823205    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:25.823210    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:25.823213    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:25.823216    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:25.823219    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:25 GMT
	I0806 00:38:25.823222    4292 round_trippers.go:580]     Audit-Id: e11f3fd5-b1c3-44c0-931c-e7172ae35765
	I0806 00:38:25.823311    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:26.320693    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:26.320710    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:26.320717    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:26.320721    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:26.322330    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:26.322339    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:26.322344    4292 round_trippers.go:580]     Audit-Id: 0c372b78-f3b7-43f2-a7aa-6ec405f17ce3
	I0806 00:38:26.322347    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:26.322350    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:26.322353    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:26.322363    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:26.322366    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:26 GMT
	I0806 00:38:26.322578    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:26.820921    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:26.820948    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:26.820966    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:26.820972    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:26.823698    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:26.823713    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:26.823723    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:26.823730    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:26.823739    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:26 GMT
	I0806 00:38:26.823757    4292 round_trippers.go:580]     Audit-Id: e8e852a8-07b7-455b-8f5b-ff9801610b22
	I0806 00:38:26.823766    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:26.823770    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:26.824211    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:26.824465    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:27.321232    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:27.321253    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:27.321265    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:27.321270    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:27.324530    4292 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:38:27.324543    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:27.324550    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:27.324554    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:27 GMT
	I0806 00:38:27.324566    4292 round_trippers.go:580]     Audit-Id: 4a0b2d15-d15f-46de-8b4a-13a9d4121efd
	I0806 00:38:27.324572    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:27.324578    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:27.324583    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:27.324732    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:27.820148    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:27.820170    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:27.820181    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:27.820186    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:27.822835    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:27.822859    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:27.823023    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:27.823030    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:27.823033    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:27.823038    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:27.823046    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:27 GMT
	I0806 00:38:27.823049    4292 round_trippers.go:580]     Audit-Id: 77dd4240-18e0-49c7-8881-ae5df446f885
	I0806 00:38:27.823127    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:28.319391    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:28.319412    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:28.319423    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:28.319431    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:28.321889    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:28.321906    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:28.321916    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:28.321923    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:28.321927    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:28.321930    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:28 GMT
	I0806 00:38:28.321933    4292 round_trippers.go:580]     Audit-Id: d4ff4fc8-d53b-4307-82a0-9a61164b0b18
	I0806 00:38:28.321937    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:28.322088    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:28.819334    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:28.819362    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:28.819374    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:28.819385    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:28.821814    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:28.821826    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:28.821833    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:28.821838    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:28.821843    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:28.821847    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:28.821851    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:28 GMT
	I0806 00:38:28.821855    4292 round_trippers.go:580]     Audit-Id: 9a79b284-c2c3-4adb-9d74-73805465144b
	I0806 00:38:28.821988    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:29.320103    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:29.320120    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:29.320128    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:29.320134    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:29.321966    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:29.321980    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:29.321987    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:29.322000    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:29.322005    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:29.322008    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:29 GMT
	I0806 00:38:29.322020    4292 round_trippers.go:580]     Audit-Id: 749bcf9b-24c9-4fac-99d8-ad9e961b1897
	I0806 00:38:29.322024    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:29.322094    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:29.322341    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:29.819722    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:29.819743    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:29.819752    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:29.819760    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:29.822636    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:29.822668    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:29.822700    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:29.822711    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:29.822721    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:29.822735    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:29 GMT
	I0806 00:38:29.822748    4292 round_trippers.go:580]     Audit-Id: 5408f9b5-fba3-4495-a0b7-9791cf82019c
	I0806 00:38:29.822773    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:29.822903    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:30.320349    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:30.320370    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.320380    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.320385    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.322518    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:30.322531    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.322538    4292 round_trippers.go:580]     Audit-Id: 1df1df85-a25c-4470-876a-7b00620c8f9b
	I0806 00:38:30.322543    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.322546    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.322550    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.322553    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.322558    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.322794    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:30.820065    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:30.820087    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.820099    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.820111    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.822652    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:30.822673    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.822683    4292 round_trippers.go:580]     Audit-Id: 0926ae78-d98d-44a5-8489-5522ccd95503
	I0806 00:38:30.822689    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.822695    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.822700    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.822706    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.822713    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.823032    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:30.823315    4292 node_ready.go:49] node "multinode-100000" has status "Ready":"True"
	I0806 00:38:30.823329    4292 node_ready.go:38] duration metric: took 15.504306549s for node "multinode-100000" to be "Ready" ...
	I0806 00:38:30.823341    4292 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
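	The repeated GETs above, issued roughly every 500ms until the node's "Ready" condition flips to "True" (15.5s here), follow a standard poll-with-timeout pattern. The sketch below is illustrative only, not minikube's actual `node_ready.go`/`pod_ready.go` implementation; the `waitFor` helper and its parameters are invented for this example.

	```go
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitFor polls check every interval until it returns true, returns an
	// error, or timeout elapses. It reports how long the wait took, mirroring
	// the "duration metric: took ..." line in the log. (Hypothetical helper,
	// not minikube's real API.)
	func waitFor(timeout, interval time.Duration, check func() (bool, error)) (time.Duration, error) {
		start := time.Now()
		for {
			ok, err := check()
			if err != nil {
				return time.Since(start), err
			}
			if ok {
				return time.Since(start), nil
			}
			if time.Since(start) > timeout {
				return time.Since(start), errors.New("timed out waiting for condition")
			}
			time.Sleep(interval)
		}
	}

	func main() {
		// Simulate a node that reports Ready on its third status poll.
		polls := 0
		elapsed, err := waitFor(2*time.Second, 10*time.Millisecond, func() (bool, error) {
			polls++
			return polls >= 3, nil
		})
		fmt.Println(polls, err == nil, elapsed > 0)
	}
	```

	In the real test, `check` would be a GET of `/api/v1/nodes/<name>` followed by inspection of the node's status conditions, and a second wait (up to 6m0s) runs the same loop over the system-critical pods in `kube-system`.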
	I0806 00:38:30.823387    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:30.823395    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.823403    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.823407    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.825747    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:30.825756    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.825761    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.825764    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.825768    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.825770    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.825773    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.825775    4292 round_trippers.go:580]     Audit-Id: f1883856-a563-4d68-a4ed-7bface4b980a
	I0806 00:38:30.827206    4292 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"431","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56289 chars]
	I0806 00:38:30.829456    4292 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:30.829498    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:38:30.829503    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.829508    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.829512    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.830675    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:30.830684    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.830691    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.830696    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.830704    4292 round_trippers.go:580]     Audit-Id: f42eab96-6adf-4fcb-9345-e180ca00b73d
	I0806 00:38:30.830715    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.830718    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.830720    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.830856    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"431","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0806 00:38:30.831092    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:30.831099    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.831105    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.831107    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.832184    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:30.832191    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.832197    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.832203    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.832207    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.832212    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.832218    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.832226    4292 round_trippers.go:580]     Audit-Id: d34ccfc2-089c-4010-b991-cc425a2b2446
	I0806 00:38:30.832371    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.329830    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:38:31.329844    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.329850    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.329854    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.331738    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:31.331767    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.331789    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.331808    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.331813    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.331817    4292 round_trippers.go:580]     Audit-Id: 32294b1b-fd5c-43f7-9851-1c5e5d04c3d9
	I0806 00:38:31.331820    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.331823    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.331921    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"431","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0806 00:38:31.332207    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.332215    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.332221    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.332225    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.333311    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:31.333324    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.333331    4292 round_trippers.go:580]     Audit-Id: a8b9458e-7f48-4e61-9daf-b2c4a52b1285
	I0806 00:38:31.333336    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.333342    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.333347    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.333351    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.333369    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.333493    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.830019    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:38:31.830040    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.830057    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.830063    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.832040    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:31.832055    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.832062    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.832068    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.832072    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.832076    4292 round_trippers.go:580]     Audit-Id: eae85e40-d774-4e35-8513-1a20542ce5f5
	I0806 00:38:31.832079    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.832082    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.832316    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"446","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6576 chars]
	I0806 00:38:31.832691    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.832701    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.832710    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.832715    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.833679    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.833688    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.833694    4292 round_trippers.go:580]     Audit-Id: ecd49a1b-eb24-4191-89bb-5cb071fd543a
	I0806 00:38:31.833699    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.833702    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.833711    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.833714    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.833717    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.833906    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.834082    4292 pod_ready.go:92] pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:31.834093    4292 pod_ready.go:81] duration metric: took 1.004604302s for pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.834101    4292 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.834131    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-100000
	I0806 00:38:31.834136    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.834141    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.834145    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.835126    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.835134    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.835139    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.835144    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.835147    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.835152    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.835155    4292 round_trippers.go:580]     Audit-Id: 8f3355e7-ed89-4a5c-9ef4-3f319a0b7eef
	I0806 00:38:31.835157    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.835289    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-100000","namespace":"kube-system","uid":"227ab7d9-399e-4151-bee7-1520182e38fe","resourceVersion":"333","creationTimestamp":"2024-08-06T07:37:59Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"4d956ffcd8bdef6a75a3174d9c9d792c","kubernetes.io/config.mirror":"4d956ffcd8bdef6a75a3174d9c9d792c","kubernetes.io/config.seen":"2024-08-06T07:37:55.730523562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:37:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0806 00:38:31.835498    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.835505    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.835510    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.835514    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.836524    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:31.836533    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.836539    4292 round_trippers.go:580]     Audit-Id: a9fdb4f7-31e3-48e4-b5f3-023b2c5e4bab
	I0806 00:38:31.836547    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.836553    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.836556    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.836562    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.836568    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.836674    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.836837    4292 pod_ready.go:92] pod "etcd-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:31.836847    4292 pod_ready.go:81] duration metric: took 2.741532ms for pod "etcd-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.836854    4292 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.836883    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-100000
	I0806 00:38:31.836888    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.836894    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.836898    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.837821    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.837830    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.837836    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.837840    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.837844    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.837846    4292 round_trippers.go:580]     Audit-Id: 32a7a6c7-72cf-4b7f-8f80-7ebb5aaaf666
	I0806 00:38:31.837850    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.837853    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.838003    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-100000","namespace":"kube-system","uid":"ce1dee9b-5f30-49a9-9066-7faf5f65c4d3","resourceVersion":"331","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"7812fbdfd4f741d8b504bcb30d9268c5","kubernetes.io/config.mirror":"7812fbdfd4f741d8b504bcb30d9268c5","kubernetes.io/config.seen":"2024-08-06T07:38:00.425843150Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7684 chars]
	I0806 00:38:31.838230    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.838237    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.838243    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.838247    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.839014    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.839023    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.839030    4292 round_trippers.go:580]     Audit-Id: 7f28e0f4-8551-4462-aec2-766b8d2482cb
	I0806 00:38:31.839036    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.839040    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.839042    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.839045    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.839048    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.839181    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.839335    4292 pod_ready.go:92] pod "kube-apiserver-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:31.839345    4292 pod_ready.go:81] duration metric: took 2.482949ms for pod "kube-apiserver-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.839352    4292 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.839378    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-100000
	I0806 00:38:31.839383    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.839388    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.839392    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.840298    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.840305    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.840310    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.840313    4292 round_trippers.go:580]     Audit-Id: cf384588-551f-4b8a-b13b-1adda6dff10a
	I0806 00:38:31.840317    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.840320    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.840324    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.840328    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.840495    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-100000","namespace":"kube-system","uid":"cefe88fb-c337-47c3-b4f2-acdadde539f2","resourceVersion":"329","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0ae29164078dfb7d8ac7d5a935c4d875","kubernetes.io/config.mirror":"0ae29164078dfb7d8ac7d5a935c4d875","kubernetes.io/config.seen":"2024-08-06T07:38:00.425770816Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7259 chars]
	I0806 00:38:31.840707    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.840714    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.840719    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.840722    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.841465    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.841471    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.841476    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.841481    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.841487    4292 round_trippers.go:580]     Audit-Id: 9a301694-659b-414d-8736-740501267c17
	I0806 00:38:31.841491    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.841496    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.841500    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.841678    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.841830    4292 pod_ready.go:92] pod "kube-controller-manager-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:31.841836    4292 pod_ready.go:81] duration metric: took 2.479787ms for pod "kube-controller-manager-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.841842    4292 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-crsrr" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.841875    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crsrr
	I0806 00:38:31.841880    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.841885    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.841890    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.842875    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.842883    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.842888    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.842891    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.842895    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.842898    4292 round_trippers.go:580]     Audit-Id: 9e07db72-d867-47d3-adbc-514b547e8978
	I0806 00:38:31.842901    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.842904    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.843113    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-crsrr","generateName":"kube-proxy-","namespace":"kube-system","uid":"f72beca3-9601-4aad-b3ba-33f8de5db052","resourceVersion":"403","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aeb7868a-2175-4480-b58d-3eb9a593c884","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aeb7868a-2175-4480-b58d-3eb9a593c884\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
	I0806 00:38:32.021239    4292 request.go:629] Waited for 177.889914ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:32.021360    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:32.021372    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.021384    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.021390    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.024288    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:32.024309    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.024318    4292 round_trippers.go:580]     Audit-Id: d85fbd21-5256-48bd-b92b-10eb012d9c7a
	I0806 00:38:32.024322    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.024327    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.024331    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.024336    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.024339    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.024617    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:32.024865    4292 pod_ready.go:92] pod "kube-proxy-crsrr" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:32.024877    4292 pod_ready.go:81] duration metric: took 183.025974ms for pod "kube-proxy-crsrr" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:32.024887    4292 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:32.222202    4292 request.go:629] Waited for 197.196804ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-100000
	I0806 00:38:32.222252    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-100000
	I0806 00:38:32.222260    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.222284    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.222291    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.225758    4292 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:38:32.225776    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.225783    4292 round_trippers.go:580]     Audit-Id: 9c5c96d8-55ee-43bd-b8fe-af3b79432f55
	I0806 00:38:32.225788    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.225791    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.225797    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.225800    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.225803    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.225862    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-100000","namespace":"kube-system","uid":"773d7bde-86f3-4e9d-b4aa-67ca3b345180","resourceVersion":"332","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4d38f57d568be838072abd789adb44b9","kubernetes.io/config.mirror":"4d38f57d568be838072abd789adb44b9","kubernetes.io/config.seen":"2024-08-06T07:38:00.425836810Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0806 00:38:32.420759    4292 request.go:629] Waited for 194.652014ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:32.420927    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:32.420938    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.420949    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.420955    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.423442    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:32.423460    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.423471    4292 round_trippers.go:580]     Audit-Id: 04a6ba1a-a35c-4d8b-a087-80f9206646b4
	I0806 00:38:32.423478    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.423483    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.423488    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.423493    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.423499    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.423791    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:32.424052    4292 pod_ready.go:92] pod "kube-scheduler-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:32.424064    4292 pod_ready.go:81] duration metric: took 399.162309ms for pod "kube-scheduler-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:32.424073    4292 pod_ready.go:38] duration metric: took 1.600692444s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 00:38:32.424096    4292 api_server.go:52] waiting for apiserver process to appear ...
	I0806 00:38:32.424160    4292 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:38:32.436813    4292 command_runner.go:130] > 1953
	I0806 00:38:32.436840    4292 api_server.go:72] duration metric: took 17.725484476s to wait for apiserver process to appear ...
	I0806 00:38:32.436849    4292 api_server.go:88] waiting for apiserver healthz status ...
	I0806 00:38:32.436863    4292 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0806 00:38:32.440364    4292 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0806 00:38:32.440399    4292 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I0806 00:38:32.440404    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.440410    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.440421    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.440928    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:32.440937    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.440942    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.440946    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.440950    4292 round_trippers.go:580]     Content-Length: 263
	I0806 00:38:32.440953    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.440959    4292 round_trippers.go:580]     Audit-Id: c1a3bf62-d4bb-49fe-bb9c-6619b1793ab6
	I0806 00:38:32.440962    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.440965    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.440976    4292 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0806 00:38:32.441018    4292 api_server.go:141] control plane version: v1.30.3
	I0806 00:38:32.441028    4292 api_server.go:131] duration metric: took 4.174407ms to wait for apiserver health ...
	I0806 00:38:32.441033    4292 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 00:38:32.620918    4292 request.go:629] Waited for 179.84972ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:32.620960    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:32.620982    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.620988    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.620992    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.623183    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:32.623194    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.623199    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.623202    4292 round_trippers.go:580]     Audit-Id: 7febd61d-780d-47b6-884a-fdaf22170934
	I0806 00:38:32.623206    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.623211    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.623217    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.623221    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.623596    4292 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"446","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0806 00:38:32.624861    4292 system_pods.go:59] 8 kube-system pods found
	I0806 00:38:32.624876    4292 system_pods.go:61] "coredns-7db6d8ff4d-snf8h" [80bd44de-6f91-4e47-8832-a66b3c64808d] Running
	I0806 00:38:32.624880    4292 system_pods.go:61] "etcd-multinode-100000" [227ab7d9-399e-4151-bee7-1520182e38fe] Running
	I0806 00:38:32.624883    4292 system_pods.go:61] "kindnet-g2xk7" [84207ead-3403-4759-9bf2-ae0aa742699e] Running
	I0806 00:38:32.624886    4292 system_pods.go:61] "kube-apiserver-multinode-100000" [ce1dee9b-5f30-49a9-9066-7faf5f65c4d3] Running
	I0806 00:38:32.624890    4292 system_pods.go:61] "kube-controller-manager-multinode-100000" [cefe88fb-c337-47c3-b4f2-acdadde539f2] Running
	I0806 00:38:32.624895    4292 system_pods.go:61] "kube-proxy-crsrr" [f72beca3-9601-4aad-b3ba-33f8de5db052] Running
	I0806 00:38:32.624897    4292 system_pods.go:61] "kube-scheduler-multinode-100000" [773d7bde-86f3-4e9d-b4aa-67ca3b345180] Running
	I0806 00:38:32.624900    4292 system_pods.go:61] "storage-provisioner" [38b20fa5-6002-4e12-860f-1aa0047581b1] Running
	I0806 00:38:32.624904    4292 system_pods.go:74] duration metric: took 183.863815ms to wait for pod list to return data ...
	I0806 00:38:32.624911    4292 default_sa.go:34] waiting for default service account to be created ...
	I0806 00:38:32.821065    4292 request.go:629] Waited for 196.088199ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0806 00:38:32.821123    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0806 00:38:32.821132    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.821146    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.821153    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.824169    4292 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:38:32.824185    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.824192    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.824198    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.824203    4292 round_trippers.go:580]     Content-Length: 261
	I0806 00:38:32.824207    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.824210    4292 round_trippers.go:580]     Audit-Id: da9e49d4-6671-4b25-a056-32b71af0fb45
	I0806 00:38:32.824214    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.824217    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.824230    4292 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b920a0f4-26ad-4389-bfd3-1a9764da9619","resourceVersion":"336","creationTimestamp":"2024-08-06T07:38:14Z"}}]}
	I0806 00:38:32.824397    4292 default_sa.go:45] found service account: "default"
	I0806 00:38:32.824409    4292 default_sa.go:55] duration metric: took 199.488573ms for default service account to be created ...
	I0806 00:38:32.824419    4292 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 00:38:33.021550    4292 request.go:629] Waited for 197.072106ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:33.021720    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:33.021731    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:33.021741    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:33.021779    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:33.025126    4292 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:38:33.025143    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:33.025150    4292 round_trippers.go:580]     Audit-Id: e38b20d4-b38f-40c8-9e18-7f94f8f63289
	I0806 00:38:33.025155    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:33.025161    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:33.025166    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:33.025173    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:33.025177    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:33 GMT
	I0806 00:38:33.025737    4292 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"446","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0806 00:38:33.027034    4292 system_pods.go:86] 8 kube-system pods found
	I0806 00:38:33.027043    4292 system_pods.go:89] "coredns-7db6d8ff4d-snf8h" [80bd44de-6f91-4e47-8832-a66b3c64808d] Running
	I0806 00:38:33.027047    4292 system_pods.go:89] "etcd-multinode-100000" [227ab7d9-399e-4151-bee7-1520182e38fe] Running
	I0806 00:38:33.027050    4292 system_pods.go:89] "kindnet-g2xk7" [84207ead-3403-4759-9bf2-ae0aa742699e] Running
	I0806 00:38:33.027054    4292 system_pods.go:89] "kube-apiserver-multinode-100000" [ce1dee9b-5f30-49a9-9066-7faf5f65c4d3] Running
	I0806 00:38:33.027057    4292 system_pods.go:89] "kube-controller-manager-multinode-100000" [cefe88fb-c337-47c3-b4f2-acdadde539f2] Running
	I0806 00:38:33.027060    4292 system_pods.go:89] "kube-proxy-crsrr" [f72beca3-9601-4aad-b3ba-33f8de5db052] Running
	I0806 00:38:33.027066    4292 system_pods.go:89] "kube-scheduler-multinode-100000" [773d7bde-86f3-4e9d-b4aa-67ca3b345180] Running
	I0806 00:38:33.027069    4292 system_pods.go:89] "storage-provisioner" [38b20fa5-6002-4e12-860f-1aa0047581b1] Running
	I0806 00:38:33.027074    4292 system_pods.go:126] duration metric: took 202.645822ms to wait for k8s-apps to be running ...
	I0806 00:38:33.027081    4292 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 00:38:33.027147    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:38:33.038782    4292 system_svc.go:56] duration metric: took 11.697186ms WaitForService to wait for kubelet
	I0806 00:38:33.038797    4292 kubeadm.go:582] duration metric: took 18.327429775s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 00:38:33.038809    4292 node_conditions.go:102] verifying NodePressure condition ...
	I0806 00:38:33.220593    4292 request.go:629] Waited for 181.736174ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes
	I0806 00:38:33.220673    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0806 00:38:33.220683    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:33.220694    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:33.220703    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:33.223131    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:33.223147    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:33.223155    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:33 GMT
	I0806 00:38:33.223160    4292 round_trippers.go:580]     Audit-Id: c7a766de-973c-44db-9b8e-eb7ce291fdca
	I0806 00:38:33.223172    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:33.223177    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:33.223182    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:33.223222    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:33.223296    4292 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5011 chars]
	I0806 00:38:33.223576    4292 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 00:38:33.223592    4292 node_conditions.go:123] node cpu capacity is 2
	I0806 00:38:33.223604    4292 node_conditions.go:105] duration metric: took 184.787012ms to run NodePressure ...
	I0806 00:38:33.223614    4292 start.go:241] waiting for startup goroutines ...
	I0806 00:38:33.223627    4292 start.go:246] waiting for cluster config update ...
	I0806 00:38:33.223640    4292 start.go:255] writing updated cluster config ...
	I0806 00:38:33.244314    4292 out.go:177] 
	I0806 00:38:33.265217    4292 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:38:33.265273    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:38:33.287112    4292 out.go:177] * Starting "multinode-100000-m02" worker node in "multinode-100000" cluster
	I0806 00:38:33.345022    4292 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:38:33.345057    4292 cache.go:56] Caching tarball of preloaded images
	I0806 00:38:33.345244    4292 preload.go:172] Found /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0806 00:38:33.345262    4292 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 00:38:33.345351    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:38:33.346110    4292 start.go:360] acquireMachinesLock for multinode-100000-m02: {Name:mk23fe223591838ba69a1052c4474834b6e8897d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:38:33.346217    4292 start.go:364] duration metric: took 84.997µs to acquireMachinesLock for "multinode-100000-m02"
	I0806 00:38:33.346243    4292 start.go:93] Provisioning new machine with config: &{Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0806 00:38:33.346328    4292 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0806 00:38:33.367079    4292 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 00:38:33.367208    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:33.367236    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:33.376938    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52447
	I0806 00:38:33.377289    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:33.377644    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:33.377655    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:33.377842    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:33.377956    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:38:33.378049    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:33.378167    4292 start.go:159] libmachine.API.Create for "multinode-100000" (driver="hyperkit")
	I0806 00:38:33.378183    4292 client.go:168] LocalClient.Create starting
	I0806 00:38:33.378211    4292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem
	I0806 00:38:33.378259    4292 main.go:141] libmachine: Decoding PEM data...
	I0806 00:38:33.378273    4292 main.go:141] libmachine: Parsing certificate...
	I0806 00:38:33.378324    4292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem
	I0806 00:38:33.378363    4292 main.go:141] libmachine: Decoding PEM data...
	I0806 00:38:33.378372    4292 main.go:141] libmachine: Parsing certificate...
	I0806 00:38:33.378386    4292 main.go:141] libmachine: Running pre-create checks...
	I0806 00:38:33.378391    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .PreCreateCheck
	I0806 00:38:33.378464    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:33.378493    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetConfigRaw
	I0806 00:38:33.388269    4292 main.go:141] libmachine: Creating machine...
	I0806 00:38:33.388286    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .Create
	I0806 00:38:33.388457    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:33.388692    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | I0806 00:38:33.388444    4424 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 00:38:33.388794    4292 main.go:141] libmachine: (multinode-100000-m02) Downloading /Users/jenkins/minikube-integration/19370-944/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-944/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 00:38:33.588443    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | I0806 00:38:33.588344    4424 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa...
	I0806 00:38:33.635329    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | I0806 00:38:33.635211    4424 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/multinode-100000-m02.rawdisk...
	I0806 00:38:33.635352    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Writing magic tar header
	I0806 00:38:33.635368    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Writing SSH key tar header
	I0806 00:38:33.635773    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | I0806 00:38:33.635735    4424 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02 ...
	I0806 00:38:34.046661    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:34.046692    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/hyperkit.pid
	I0806 00:38:34.046795    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Using UUID 11e38ce6-805a-4a8b-9cb1-968ee3a613d4
	I0806 00:38:34.072180    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Generated MAC ee:b:b7:3a:75:5c
	I0806 00:38:34.072206    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000
	I0806 00:38:34.072252    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"11e38ce6-805a-4a8b-9cb1-968ee3a613d4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00011a450)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", pr
ocess:(*os.Process)(nil)}
	I0806 00:38:34.072281    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"11e38ce6-805a-4a8b-9cb1-968ee3a613d4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00011a450)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", pr
ocess:(*os.Process)(nil)}
	I0806 00:38:34.072340    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "11e38ce6-805a-4a8b-9cb1-968ee3a613d4", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/multinode-100000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage,/Users/jenkins
/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"}
	I0806 00:38:34.072382    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 11e38ce6-805a-4a8b-9cb1-968ee3a613d4 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/multinode-100000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-1
00000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"
	I0806 00:38:34.072394    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0806 00:38:34.075231    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: Pid is 4427
	I0806 00:38:34.076417    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 0
	I0806 00:38:34.076438    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:34.076502    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:34.077372    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:34.077449    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:34.077468    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:34.077497    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:34.077509    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:34.077532    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:34.077550    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:34.077560    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:34.077570    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:34.077578    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:34.077587    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:34.077606    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:34.077631    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:34.077647    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:34.082964    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0806 00:38:34.092078    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0806 00:38:34.092798    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:38:34.092819    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:38:34.092831    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:38:34.092850    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:38:34.480770    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0806 00:38:34.480795    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0806 00:38:34.595499    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:38:34.595518    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:38:34.595530    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:38:34.595538    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:38:34.596350    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0806 00:38:34.596362    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0806 00:38:36.077787    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 1
	I0806 00:38:36.077803    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:36.077889    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:36.078719    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:36.078768    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:36.078779    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:36.078796    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:36.078805    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:36.078813    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:36.078820    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:36.078827    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:36.078837    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:36.078843    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:36.078849    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:36.078864    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:36.078881    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:36.078889    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:38.079369    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 2
	I0806 00:38:38.079385    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:38.079432    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:38.080212    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:38.080262    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:38.080273    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:38.080290    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:38.080296    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:38.080303    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:38.080310    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:38.080318    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:38.080325    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:38.080339    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:38.080355    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:38.080367    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:38.080376    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:38.080384    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:40.081876    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 3
	I0806 00:38:40.081892    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:40.081903    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:40.082774    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:40.082801    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:40.082812    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:40.082846    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:40.082873    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:40.082900    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:40.082918    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:40.082931    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:40.082940    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:40.082950    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:40.082966    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:40.082978    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:40.082987    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:40.082995    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:40.179725    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:40 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0806 00:38:40.179781    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:40 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0806 00:38:40.179795    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:40 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0806 00:38:40.203197    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:40 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0806 00:38:42.084360    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 4
	I0806 00:38:42.084374    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:42.084499    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:42.085281    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:42.085335    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:42.085343    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:42.085351    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:42.085358    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:42.085365    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:42.085371    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:42.085378    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:42.085386    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:42.085402    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:42.085414    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:42.085433    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:42.085441    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:42.085450    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:44.085602    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 5
	I0806 00:38:44.085628    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:44.085697    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:44.086496    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:44.086550    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 13 entries in /var/db/dhcpd_leases!
	I0806 00:38:44.086561    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b32483}
	I0806 00:38:44.086569    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found match: ee:b:b7:3a:75:5c
	I0806 00:38:44.086577    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | IP: 192.169.0.14
	I0806 00:38:44.086637    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetConfigRaw
	I0806 00:38:44.087855    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:44.087962    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:44.088059    4292 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0806 00:38:44.088068    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetState
	I0806 00:38:44.088141    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:44.088197    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:44.089006    4292 main.go:141] libmachine: Detecting operating system of created instance...
	I0806 00:38:44.089014    4292 main.go:141] libmachine: Waiting for SSH to be available...
	I0806 00:38:44.089023    4292 main.go:141] libmachine: Getting to WaitForSSH function...
	I0806 00:38:44.089029    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:44.089111    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:44.089190    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:44.089273    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:44.089354    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:44.089473    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:44.089664    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:44.089672    4292 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0806 00:38:45.153792    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:38:45.153806    4292 main.go:141] libmachine: Detecting the provisioner...
	I0806 00:38:45.153811    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.153942    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.154043    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.154169    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.154275    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.154425    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.154571    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.154581    4292 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0806 00:38:45.217564    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0806 00:38:45.217637    4292 main.go:141] libmachine: found compatible host: buildroot
	I0806 00:38:45.217648    4292 main.go:141] libmachine: Provisioning with buildroot...
	I0806 00:38:45.217668    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:38:45.217807    4292 buildroot.go:166] provisioning hostname "multinode-100000-m02"
	I0806 00:38:45.217817    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:38:45.217917    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.218023    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.218107    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.218194    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.218285    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.218407    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.218557    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.218566    4292 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-100000-m02 && echo "multinode-100000-m02" | sudo tee /etc/hostname
	I0806 00:38:45.293086    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-100000-m02
	
	I0806 00:38:45.293102    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.293254    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.293346    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.293437    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.293522    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.293658    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.293798    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.293811    4292 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-100000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-100000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-100000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 00:38:45.363408    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:38:45.363423    4292 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19370-944/.minikube CaCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19370-944/.minikube}
	I0806 00:38:45.363450    4292 buildroot.go:174] setting up certificates
	I0806 00:38:45.363457    4292 provision.go:84] configureAuth start
	I0806 00:38:45.363465    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:38:45.363605    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:38:45.363709    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.363796    4292 provision.go:143] copyHostCerts
	I0806 00:38:45.363827    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:38:45.363873    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem, removing ...
	I0806 00:38:45.363879    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:38:45.364378    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem (1078 bytes)
	I0806 00:38:45.364592    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:38:45.364623    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem, removing ...
	I0806 00:38:45.364628    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:38:45.364717    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem (1123 bytes)
	I0806 00:38:45.364875    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:38:45.364915    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem, removing ...
	I0806 00:38:45.364920    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:38:45.365034    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem (1679 bytes)
	I0806 00:38:45.365183    4292 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem org=jenkins.multinode-100000-m02 san=[127.0.0.1 192.169.0.14 localhost minikube multinode-100000-m02]
	I0806 00:38:45.437744    4292 provision.go:177] copyRemoteCerts
	I0806 00:38:45.437791    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 00:38:45.437806    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.437948    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.438040    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.438126    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.438207    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:38:45.477030    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0806 00:38:45.477105    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0806 00:38:45.496899    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0806 00:38:45.496965    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 00:38:45.516273    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0806 00:38:45.516341    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 00:38:45.536083    4292 provision.go:87] duration metric: took 172.615051ms to configureAuth
	I0806 00:38:45.536096    4292 buildroot.go:189] setting minikube options for container-runtime
	I0806 00:38:45.536221    4292 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:38:45.536234    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:45.536380    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.536470    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.536563    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.536650    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.536733    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.536861    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.536987    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.536994    4292 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0806 00:38:45.599518    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0806 00:38:45.599531    4292 buildroot.go:70] root file system type: tmpfs
	I0806 00:38:45.599626    4292 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0806 00:38:45.599637    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.599779    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.599891    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.599996    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.600086    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.600232    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.600374    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.600420    4292 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.13"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0806 00:38:45.674942    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.13
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0806 00:38:45.674960    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.675092    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.675165    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.675259    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.675344    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.675469    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.675602    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.675614    4292 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0806 00:38:47.211811    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
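	The `diff ... || { mv ...; systemctl ... }` command above is an idempotent update pattern: the new unit file is installed, and the daemon reloaded and restarted, only when the rendered file differs from what is on disk (here the diff fails outright because no old unit exists yet, which still triggers the install branch). A minimal local sketch of the same pattern, using hypothetical temp files in place of `/lib/systemd/system/docker.service*` and omitting `sudo`/`systemctl`:

	```shell
	# Idempotent config install: replace the live file only if the new one differs.
	# $old and $new are hypothetical stand-ins for docker.service and docker.service.new.
	old=$(mktemp) && new=$(mktemp)
	printf 'KillMode=control-group\n' > "$old"   # current (stale) unit
	printf 'KillMode=process\n'       > "$new"   # freshly rendered unit

	if ! diff -u "$old" "$new" > /dev/null; then
	  mv "$new" "$old"   # in the real flow this is followed by daemon-reload + restart
	  changed=1
	else
	  changed=0
	fi

	grep 'KillMode' "$old"
	```

	When the two files already match, `diff` exits 0 and the restart branch is skipped entirely, which is what keeps repeated `minikube start` runs from needlessly bouncing dockerd.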
	I0806 00:38:47.211826    4292 main.go:141] libmachine: Checking connection to Docker...
	I0806 00:38:47.211840    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetURL
	I0806 00:38:47.211985    4292 main.go:141] libmachine: Docker is up and running!
	I0806 00:38:47.211993    4292 main.go:141] libmachine: Reticulating splines...
	I0806 00:38:47.212004    4292 client.go:171] duration metric: took 13.833536596s to LocalClient.Create
	I0806 00:38:47.212016    4292 start.go:167] duration metric: took 13.833577856s to libmachine.API.Create "multinode-100000"
	I0806 00:38:47.212022    4292 start.go:293] postStartSetup for "multinode-100000-m02" (driver="hyperkit")
	I0806 00:38:47.212029    4292 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 00:38:47.212038    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.212165    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 00:38:47.212186    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:47.212274    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:47.212359    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.212450    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:47.212536    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:38:47.253675    4292 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 00:38:47.257359    4292 command_runner.go:130] > NAME=Buildroot
	I0806 00:38:47.257369    4292 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0806 00:38:47.257374    4292 command_runner.go:130] > ID=buildroot
	I0806 00:38:47.257380    4292 command_runner.go:130] > VERSION_ID=2023.02.9
	I0806 00:38:47.257386    4292 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0806 00:38:47.257598    4292 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 00:38:47.257609    4292 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/addons for local assets ...
	I0806 00:38:47.257715    4292 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/files for local assets ...
	I0806 00:38:47.257899    4292 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> 14372.pem in /etc/ssl/certs
	I0806 00:38:47.257909    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> /etc/ssl/certs/14372.pem
	I0806 00:38:47.258116    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 00:38:47.265892    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem --> /etc/ssl/certs/14372.pem (1708 bytes)
	I0806 00:38:47.297110    4292 start.go:296] duration metric: took 85.078237ms for postStartSetup
	I0806 00:38:47.297144    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetConfigRaw
	I0806 00:38:47.297792    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:38:47.297951    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:38:47.298302    4292 start.go:128] duration metric: took 13.951673071s to createHost
	I0806 00:38:47.298316    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:47.298413    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:47.298502    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.298600    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.298678    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:47.298783    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:47.298907    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:47.298914    4292 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 00:38:47.362043    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722929927.409318196
	
	I0806 00:38:47.362057    4292 fix.go:216] guest clock: 1722929927.409318196
	I0806 00:38:47.362062    4292 fix.go:229] Guest: 2024-08-06 00:38:47.409318196 -0700 PDT Remote: 2024-08-06 00:38:47.29831 -0700 PDT m=+194.654596821 (delta=111.008196ms)
	I0806 00:38:47.362071    4292 fix.go:200] guest clock delta is within tolerance: 111.008196ms
	I0806 00:38:47.362075    4292 start.go:83] releasing machines lock for "multinode-100000-m02", held for 14.015572789s
	I0806 00:38:47.362092    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.362220    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:38:47.382612    4292 out.go:177] * Found network options:
	I0806 00:38:47.403509    4292 out.go:177]   - NO_PROXY=192.169.0.13
	W0806 00:38:47.425687    4292 proxy.go:119] fail to check proxy env: Error ip not in block
	I0806 00:38:47.425738    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.426659    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.426958    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.427090    4292 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 00:38:47.427141    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	W0806 00:38:47.427187    4292 proxy.go:119] fail to check proxy env: Error ip not in block
	I0806 00:38:47.427313    4292 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0806 00:38:47.427341    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:47.427407    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:47.427565    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.427581    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:47.427794    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:47.427828    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.428004    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:38:47.428059    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:47.428184    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:38:47.463967    4292 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0806 00:38:47.464076    4292 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 00:38:47.464135    4292 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 00:38:47.515738    4292 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0806 00:38:47.516046    4292 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0806 00:38:47.516081    4292 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 00:38:47.516093    4292 start.go:495] detecting cgroup driver to use...
	I0806 00:38:47.516195    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:38:47.531806    4292 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
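	Both crictl endpoint writes in this log use the same `printf ... | sudo tee` idiom to create `/etc/crictl.yaml` (the `%!s(MISSING)` in the logged command is minikube's fmt-verb echo of `printf %s`, not part of the shell command that actually runs). A sketch of the idiom against a hypothetical temp directory, without `sudo`:

	```shell
	# Write a crictl config the same way the log does: printf piped through tee.
	# $dir stands in for /etc; the endpoint value is copied from the log above.
	dir=$(mktemp -d)
	printf %s 'runtime-endpoint: unix:///run/containerd/containerd.sock
	' | tee "$dir/crictl.yaml" > /dev/null

	cat "$dir/crictl.yaml"
	```

	`tee` (rather than plain `>`) matters in the real command because the redirection must happen inside the elevated process: `sudo printf ... > /etc/crictl.yaml` would open the file as the unprivileged user and fail.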
	I0806 00:38:47.532062    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0806 00:38:47.541039    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0806 00:38:47.549828    4292 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0806 00:38:47.549876    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0806 00:38:47.558599    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:38:47.567484    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0806 00:38:47.576295    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:38:47.585146    4292 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 00:38:47.594084    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0806 00:38:47.603103    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0806 00:38:47.612032    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0806 00:38:47.620981    4292 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 00:38:47.628905    4292 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0806 00:38:47.629040    4292 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 00:38:47.637032    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:38:47.727863    4292 ssh_runner.go:195] Run: sudo systemctl restart containerd
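	The sed chain above switches containerd to the "cgroupfs" driver by rewriting `/etc/containerd/config.toml` in place before the daemon is restarted. A self-contained sketch of the key substitution, assuming GNU sed (`-i -r`, as on the Buildroot guest) and a temp file in place of the real config:

	```shell
	# Reproduce the SystemdCgroup rewrite from the log against a scratch config.toml.
	cfg=$(mktemp)
	printf '[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n' > "$cfg"

	# Same expression the log runs: force SystemdCgroup = false, preserving indentation
	# via the captured leading spaces.
	sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"

	grep 'SystemdCgroup' "$cfg"
	```

	The `( *)` capture is why the rewrite is safe regardless of how deeply the key is nested in the TOML: only the value changes, the indentation is echoed back via `\1`.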
	I0806 00:38:47.745831    4292 start.go:495] detecting cgroup driver to use...
	I0806 00:38:47.745898    4292 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0806 00:38:47.763079    4292 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0806 00:38:47.764017    4292 command_runner.go:130] > [Unit]
	I0806 00:38:47.764028    4292 command_runner.go:130] > Description=Docker Application Container Engine
	I0806 00:38:47.764033    4292 command_runner.go:130] > Documentation=https://docs.docker.com
	I0806 00:38:47.764038    4292 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0806 00:38:47.764043    4292 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0806 00:38:47.764047    4292 command_runner.go:130] > StartLimitBurst=3
	I0806 00:38:47.764051    4292 command_runner.go:130] > StartLimitIntervalSec=60
	I0806 00:38:47.764054    4292 command_runner.go:130] > [Service]
	I0806 00:38:47.764058    4292 command_runner.go:130] > Type=notify
	I0806 00:38:47.764062    4292 command_runner.go:130] > Restart=on-failure
	I0806 00:38:47.764066    4292 command_runner.go:130] > Environment=NO_PROXY=192.169.0.13
	I0806 00:38:47.764072    4292 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0806 00:38:47.764084    4292 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0806 00:38:47.764091    4292 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0806 00:38:47.764099    4292 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0806 00:38:47.764105    4292 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0806 00:38:47.764111    4292 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0806 00:38:47.764118    4292 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0806 00:38:47.764125    4292 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0806 00:38:47.764132    4292 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0806 00:38:47.764135    4292 command_runner.go:130] > ExecStart=
	I0806 00:38:47.764154    4292 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0806 00:38:47.764161    4292 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0806 00:38:47.764170    4292 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0806 00:38:47.764178    4292 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0806 00:38:47.764185    4292 command_runner.go:130] > LimitNOFILE=infinity
	I0806 00:38:47.764190    4292 command_runner.go:130] > LimitNPROC=infinity
	I0806 00:38:47.764193    4292 command_runner.go:130] > LimitCORE=infinity
	I0806 00:38:47.764198    4292 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0806 00:38:47.764203    4292 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0806 00:38:47.764207    4292 command_runner.go:130] > TasksMax=infinity
	I0806 00:38:47.764211    4292 command_runner.go:130] > TimeoutStartSec=0
	I0806 00:38:47.764221    4292 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0806 00:38:47.764225    4292 command_runner.go:130] > Delegate=yes
	I0806 00:38:47.764229    4292 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0806 00:38:47.764248    4292 command_runner.go:130] > KillMode=process
	I0806 00:38:47.764252    4292 command_runner.go:130] > [Install]
	I0806 00:38:47.764256    4292 command_runner.go:130] > WantedBy=multi-user.target
	I0806 00:38:47.765971    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:38:47.779284    4292 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 00:38:47.799617    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:38:47.811733    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:38:47.822897    4292 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0806 00:38:47.842546    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:38:47.852923    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:38:47.867417    4292 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0806 00:38:47.867762    4292 ssh_runner.go:195] Run: which cri-dockerd
	I0806 00:38:47.870482    4292 command_runner.go:130] > /usr/bin/cri-dockerd
	I0806 00:38:47.870656    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0806 00:38:47.877934    4292 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0806 00:38:47.891287    4292 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0806 00:38:47.996736    4292 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0806 00:38:48.093921    4292 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0806 00:38:48.093947    4292 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0806 00:38:48.107654    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:38:48.205348    4292 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:39:49.225463    4292 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0806 00:39:49.225479    4292 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0806 00:39:49.225576    4292 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.019011706s)
	I0806 00:39:49.225635    4292 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0806 00:39:49.235342    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0806 00:39:49.235356    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.029974914Z" level=info msg="Starting up"
	I0806 00:39:49.235366    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.030437769Z" level=info msg="containerd not running, starting managed containerd"
	I0806 00:39:49.235376    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.030979400Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=517
	I0806 00:39:49.235386    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.047036729Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0806 00:39:49.235397    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064397167Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0806 00:39:49.235412    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064452673Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0806 00:39:49.235422    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064502313Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0806 00:39:49.235431    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064513542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235443    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064584182Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235454    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064595120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235473    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064727739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235483    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064762709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235494    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064774342Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235504    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064782161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235516    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064887916Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235526    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.065042581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235542    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.066836201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235552    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.066879570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235575    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067028916Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235585    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067064324Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0806 00:39:49.235594    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067179567Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0806 00:39:49.235602    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067249087Z" level=info msg="metadata content store policy set" policy=shared
	I0806 00:39:49.235611    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069585528Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0806 00:39:49.235620    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069659860Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0806 00:39:49.235632    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069674694Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0806 00:39:49.235641    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069684754Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0806 00:39:49.235650    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069696901Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0806 00:39:49.235663    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069776277Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0806 00:39:49.235672    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070041788Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0806 00:39:49.235681    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070145442Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0806 00:39:49.235690    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070181841Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0806 00:39:49.235699    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070193788Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0806 00:39:49.235708    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070209053Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235730    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070220561Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235739    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070229053Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235748    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070237872Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235765    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070247145Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235774    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070258808Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235870    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070271932Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235884    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070282113Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235895    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070295317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235905    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070333749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235913    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070369063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235922    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070379382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235931    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070387399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235940    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070395816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235948    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070403669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235957    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070414456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235966    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070430669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235975    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070442977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235983    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070451302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235992    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070459477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236001    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070468439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236009    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070478113Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0806 00:39:49.236018    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070497412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236026    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070508384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236035    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070518009Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0806 00:39:49.236044    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070547883Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0806 00:39:49.236055    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070582373Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0806 00:39:49.236065    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070592270Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0806 00:39:49.236165    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070600495Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0806 00:39:49.236179    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070607217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236192    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070615273Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0806 00:39:49.236200    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070622931Z" level=info msg="NRI interface is disabled by configuration."
	I0806 00:39:49.236208    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070750538Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0806 00:39:49.236217    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070809085Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0806 00:39:49.236224    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070954500Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0806 00:39:49.236232    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070997549Z" level=info msg="containerd successfully booted in 0.024512s"
	I0806 00:39:49.236240    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.050791909Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0806 00:39:49.236247    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.057142082Z" level=info msg="Loading containers: start."
	I0806 00:39:49.236266    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.142415375Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0806 00:39:49.236275    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.222958623Z" level=info msg="Loading containers: done."
	I0806 00:39:49.236287    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.231011060Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	I0806 00:39:49.236296    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.231179810Z" level=info msg="Daemon has completed initialization"
	I0806 00:39:49.236304    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.256766502Z" level=info msg="API listen on [::]:2376"
	I0806 00:39:49.236312    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 systemd[1]: Started Docker Application Container Engine.
	I0806 00:39:49.236320    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.256921161Z" level=info msg="API listen on /var/run/docker.sock"
	I0806 00:39:49.236327    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.264611587Z" level=info msg="Processing signal 'terminated'"
	I0806 00:39:49.236336    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265650519Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0806 00:39:49.236346    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265852818Z" level=info msg="Daemon shutdown complete"
	I0806 00:39:49.236355    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265902413Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0806 00:39:49.236364    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265913447Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0806 00:39:49.236371    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0806 00:39:49.236376    4292 command_runner.go:130] > Aug 06 07:38:49 multinode-100000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0806 00:39:49.236404    4292 command_runner.go:130] > Aug 06 07:38:49 multinode-100000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0806 00:39:49.236411    4292 command_runner.go:130] > Aug 06 07:38:49 multinode-100000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0806 00:39:49.236417    4292 command_runner.go:130] > Aug 06 07:38:49 multinode-100000-m02 dockerd[911]: time="2024-08-06T07:38:49.299585024Z" level=info msg="Starting up"
	I0806 00:39:49.236427    4292 command_runner.go:130] > Aug 06 07:39:49 multinode-100000-m02 dockerd[911]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0806 00:39:49.236434    4292 command_runner.go:130] > Aug 06 07:39:49 multinode-100000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0806 00:39:49.236440    4292 command_runner.go:130] > Aug 06 07:39:49 multinode-100000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0806 00:39:49.236446    4292 command_runner.go:130] > Aug 06 07:39:49 multinode-100000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0806 00:39:49.260697    4292 out.go:177] 
	W0806 00:39:49.281618    4292 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 06 07:38:46 multinode-100000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.029974914Z" level=info msg="Starting up"
	Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.030437769Z" level=info msg="containerd not running, starting managed containerd"
	Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.030979400Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=517
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.047036729Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064397167Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064452673Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064502313Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064513542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064584182Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064595120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064727739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064762709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064774342Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064782161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064887916Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.065042581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.066836201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.066879570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067028916Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067064324Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067179567Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067249087Z" level=info msg="metadata content store policy set" policy=shared
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069585528Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069659860Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069674694Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069684754Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069696901Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069776277Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070041788Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070145442Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070181841Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070193788Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070209053Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070220561Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070229053Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070237872Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070247145Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070258808Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070271932Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070282113Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070295317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070333749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070369063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070379382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070387399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070395816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070403669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070414456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070430669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070442977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070451302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070459477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070468439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070478113Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070497412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070508384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070518009Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070547883Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070582373Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070592270Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070600495Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070607217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070615273Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070622931Z" level=info msg="NRI interface is disabled by configuration."
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070750538Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070809085Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070954500Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070997549Z" level=info msg="containerd successfully booted in 0.024512s"
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.050791909Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.057142082Z" level=info msg="Loading containers: start."
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.142415375Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.222958623Z" level=info msg="Loading containers: done."
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.231011060Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.231179810Z" level=info msg="Daemon has completed initialization"
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.256766502Z" level=info msg="API listen on [::]:2376"
	Aug 06 07:38:47 multinode-100000-m02 systemd[1]: Started Docker Application Container Engine.
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.256921161Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.264611587Z" level=info msg="Processing signal 'terminated'"
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265650519Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265852818Z" level=info msg="Daemon shutdown complete"
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265902413Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265913447Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 06 07:38:48 multinode-100000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Aug 06 07:38:49 multinode-100000-m02 systemd[1]: docker.service: Deactivated successfully.
	Aug 06 07:38:49 multinode-100000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:38:49 multinode-100000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:38:49 multinode-100000-m02 dockerd[911]: time="2024-08-06T07:38:49.299585024Z" level=info msg="Starting up"
	Aug 06 07:39:49 multinode-100000-m02 dockerd[911]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 06 07:39:49 multinode-100000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 06 07:39:49 multinode-100000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 06 07:39:49 multinode-100000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0806 00:39:49.281745    4292 out.go:239] * 
	W0806 00:39:49.282923    4292 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 00:39:49.343567    4292 out.go:177] 
	
	
	==> Docker <==
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.120405532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.122053171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.122124908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.122262728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.123348677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 cri-dockerd[1120]: time="2024-08-06T07:38:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5fae897eca5b0180afaec9950c31ab8fe6410f45ea64033ab2505d448d0abc87/resolv.conf as [nameserver 192.169.0.1]"
	Aug 06 07:38:31 multinode-100000 cri-dockerd[1120]: time="2024-08-06T07:38:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ea5bc31c54836987e38373933c6df0383027c87ef8cff7c9e1da5b24b5cabe9c/resolv.conf as [nameserver 192.169.0.1]"
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.260884497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.261094181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.261344995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.270291928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.310563342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.310630330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.310652817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.310750128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:39:53 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:53.415212392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:39:53 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:53.415272093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:39:53 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:53.415281683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:39:53 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:53.415427967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:39:53 multinode-100000 cri-dockerd[1120]: time="2024-08-06T07:39:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/730773bd53054521739eb2bf3731e90f06df86c05a2f2435964943abea426db3/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 06 07:39:54 multinode-100000 cri-dockerd[1120]: time="2024-08-06T07:39:54Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Aug 06 07:39:54 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:54.619309751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:39:54 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:54.619368219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:39:54 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:54.619377598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:39:54 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:54.619772649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f4860a1bb0cb9       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   11 minutes ago      Running             busybox                   0                   730773bd53054       busybox-fc5497c4f-dzbn7
	4a58bc5cb9c3e       cbb01a7bd410d                                                                                         13 minutes ago      Running             coredns                   0                   ea5bc31c54836       coredns-7db6d8ff4d-snf8h
	47e0c0c6895ef       6e38f40d628db                                                                                         13 minutes ago      Running             storage-provisioner       0                   5fae897eca5b0       storage-provisioner
	ca21c7b20c75e       kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3              13 minutes ago      Running             kindnet-cni               0                   731b397a827bd       kindnet-g2xk7
	10a2028447459       55bb025d2cfa5                                                                                         13 minutes ago      Running             kube-proxy                0                   6bbb2ed0b308f       kube-proxy-crsrr
	09c41cba0052b       3edc18e7b7672                                                                                         13 minutes ago      Running             kube-scheduler            0                   d20d569460ead       kube-scheduler-multinode-100000
	b60a8dd0efa51       3861cfcd7c04c                                                                                         13 minutes ago      Running             etcd                      0                   94cf07fa5ddcf       etcd-multinode-100000
	6d93185f30a91       1f6d574d502f3                                                                                         13 minutes ago      Running             kube-apiserver            0                   bde71375b0e4c       kube-apiserver-multinode-100000
	e6892e6b325e1       76932a3b37d7e                                                                                         13 minutes ago      Running             kube-controller-manager   0                   8cca7996d392f       kube-controller-manager-multinode-100000
	
	
	==> coredns [4a58bc5cb9c3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54441 - 10694 "HINFO IN 5152607944082316412.2643734041882751245. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.012399296s
	[INFO] 10.244.0.3:56703 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015252s
	[INFO] 10.244.0.3:42200 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.046026881s
	[INFO] 10.244.0.3:42318 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.01031955s
	[INFO] 10.244.0.3:37586 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.010459799s
	[INFO] 10.244.0.3:58156 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135202s
	[INFO] 10.244.0.3:44245 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010537472s
	[INFO] 10.244.0.3:44922 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150629s
	[INFO] 10.244.0.3:39974 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013721s
	[INFO] 10.244.0.3:33617 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010347469s
	[INFO] 10.244.0.3:38936 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154675s
	[INFO] 10.244.0.3:44726 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080983s
	[INFO] 10.244.0.3:41349 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000247413s
	[INFO] 10.244.0.3:54177 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116507s
	[INFO] 10.244.0.3:35929 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000055089s
	[INFO] 10.244.0.3:46361 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084906s
	[INFO] 10.244.0.3:49686 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085442s
	[INFO] 10.244.0.3:47333 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0000847s
	[INFO] 10.244.0.3:41915 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000057433s
	[INFO] 10.244.0.3:34860 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000071303s
	[INFO] 10.244.0.3:46952 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000111703s
	
	
	==> describe nodes <==
	Name:               multinode-100000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-100000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=multinode-100000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_06T00_38_01_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:37:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-100000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 07:51:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 07:50:14 +0000   Tue, 06 Aug 2024 07:37:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 07:50:14 +0000   Tue, 06 Aug 2024 07:37:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 07:50:14 +0000   Tue, 06 Aug 2024 07:37:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 07:50:14 +0000   Tue, 06 Aug 2024 07:38:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.13
	  Hostname:    multinode-100000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 10d8fd2a8ab04e6a90b6dfc076d9ae86
	  System UUID:                9d6d49b5-0000-0000-bb0f-6ea8b6ad2848
	  Boot ID:                    dbebf245-a006-4d46-bf5f-51c5f84b672f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dzbn7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-snf8h                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-100000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-g2xk7                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-100000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-multinode-100000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-crsrr                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-100000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node multinode-100000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node multinode-100000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node multinode-100000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node multinode-100000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node multinode-100000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node multinode-100000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node multinode-100000 event: Registered Node multinode-100000 in Controller
	  Normal  NodeReady                13m                kubelet          Node multinode-100000 status is now: NodeReady
	
	
	==> dmesg <==
	[  +2.230733] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.851509] systemd-fstab-generator[493]: Ignoring "noauto" option for root device
	[  +0.100234] systemd-fstab-generator[504]: Ignoring "noauto" option for root device
	[  +1.793153] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +0.258718] systemd-fstab-generator[802]: Ignoring "noauto" option for root device
	[  +0.053606] kauditd_printk_skb: 95 callbacks suppressed
	[  +0.051277] systemd-fstab-generator[814]: Ignoring "noauto" option for root device
	[  +0.111209] systemd-fstab-generator[828]: Ignoring "noauto" option for root device
	[Aug 6 07:37] systemd-fstab-generator[1073]: Ignoring "noauto" option for root device
	[  +0.053283] kauditd_printk_skb: 92 callbacks suppressed
	[  +0.042150] systemd-fstab-generator[1085]: Ignoring "noauto" option for root device
	[  +0.103517] systemd-fstab-generator[1097]: Ignoring "noauto" option for root device
	[  +0.125760] systemd-fstab-generator[1112]: Ignoring "noauto" option for root device
	[  +3.585995] systemd-fstab-generator[1212]: Ignoring "noauto" option for root device
	[  +2.213789] kauditd_printk_skb: 100 callbacks suppressed
	[  +0.337931] systemd-fstab-generator[1463]: Ignoring "noauto" option for root device
	[  +3.523944] systemd-fstab-generator[1642]: Ignoring "noauto" option for root device
	[  +1.294549] kauditd_printk_skb: 100 callbacks suppressed
	[  +3.741886] systemd-fstab-generator[2044]: Ignoring "noauto" option for root device
	[Aug 6 07:38] systemd-fstab-generator[2255]: Ignoring "noauto" option for root device
	[  +0.124943] kauditd_printk_skb: 32 callbacks suppressed
	[ +16.004460] kauditd_printk_skb: 60 callbacks suppressed
	[Aug 6 07:39] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [b60a8dd0efa5] <==
	{"level":"info","ts":"2024-08-06T07:37:56.793629Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"e0290fa3161c5471","initial-advertise-peer-urls":["https://192.169.0.13:2380"],"listen-peer-urls":["https://192.169.0.13:2380"],"advertise-client-urls":["https://192.169.0.13:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.169.0.13:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-06T07:37:56.793645Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-06T07:37:56.796498Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2024-08-06T07:37:56.796632Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","added-peer-id":"e0290fa3161c5471","added-peer-peer-urls":["https://192.169.0.13:2380"]}
	{"level":"info","ts":"2024-08-06T07:37:57.149401Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-06T07:37:57.149446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-06T07:37:57.149465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgPreVoteResp from e0290fa3161c5471 at term 1"}
	{"level":"info","ts":"2024-08-06T07:37:57.149631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became candidate at term 2"}
	{"level":"info","ts":"2024-08-06T07:37:57.14964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgVoteResp from e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-08-06T07:37:57.149646Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became leader at term 2"}
	{"level":"info","ts":"2024-08-06T07:37:57.149652Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e0290fa3161c5471 elected leader e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-08-06T07:37:57.152418Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:37:57.153493Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e0290fa3161c5471","local-member-attributes":"{Name:multinode-100000 ClientURLs:[https://192.169.0.13:2379]}","request-path":"/0/members/e0290fa3161c5471/attributes","cluster-id":"87b46e718846f146","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-06T07:37:57.153528Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T07:37:57.154583Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T07:37:57.156332Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-06T07:37:57.162987Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.13:2379"}
	{"level":"info","ts":"2024-08-06T07:37:57.167336Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-06T07:37:57.167373Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-06T07:37:57.16953Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:37:57.169589Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:37:57.169719Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:47:57.219223Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":686}
	{"level":"info","ts":"2024-08-06T07:47:57.221754Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":686,"took":"2.185771ms","hash":4164319908,"current-db-size-bytes":1994752,"current-db-size":"2.0 MB","current-db-size-in-use-bytes":1994752,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-08-06T07:47:57.221798Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4164319908,"revision":686,"compact-revision":-1}
	
	
	==> kernel <==
	 07:51:45 up 16 min,  0 users,  load average: 0.09, 0.08, 0.05
	Linux multinode-100000 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ca21c7b20c75] <==
	I0806 07:49:39.617585       1 main.go:299] handling current node
	I0806 07:49:49.609464       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:49:49.609605       1 main.go:299] handling current node
	I0806 07:49:59.610257       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:49:59.610324       1 main.go:299] handling current node
	I0806 07:50:09.617433       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:50:09.617548       1 main.go:299] handling current node
	I0806 07:50:19.609011       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:50:19.609119       1 main.go:299] handling current node
	I0806 07:50:29.613066       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:50:29.613117       1 main.go:299] handling current node
	I0806 07:50:39.608584       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:50:39.608693       1 main.go:299] handling current node
	I0806 07:50:49.609744       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:50:49.609775       1 main.go:299] handling current node
	I0806 07:50:59.609097       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:50:59.609130       1 main.go:299] handling current node
	I0806 07:51:09.609598       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:51:09.609738       1 main.go:299] handling current node
	I0806 07:51:19.608251       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:51:19.608633       1 main.go:299] handling current node
	I0806 07:51:29.610799       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:51:29.611016       1 main.go:299] handling current node
	I0806 07:51:39.608566       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:51:39.608751       1 main.go:299] handling current node
	
	
	==> kube-apiserver [6d93185f30a9] <==
	E0806 07:37:58.467821       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E0806 07:37:58.475966       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0806 07:37:58.532827       1 controller.go:615] quota admission added evaluator for: namespaces
	E0806 07:37:58.541093       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0806 07:37:58.672921       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0806 07:37:59.326856       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0806 07:37:59.329555       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0806 07:37:59.329585       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0806 07:37:59.607795       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0806 07:37:59.629707       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0806 07:37:59.743716       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0806 07:37:59.749420       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.13]
	I0806 07:37:59.751068       1 controller.go:615] quota admission added evaluator for: endpoints
	I0806 07:37:59.755409       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0806 07:38:00.364128       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0806 07:38:00.587524       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0806 07:38:00.593919       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0806 07:38:00.599813       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0806 07:38:14.702592       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0806 07:38:14.795881       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0806 07:51:40.593542       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52513: use of closed network connection
	E0806 07:51:40.913864       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52518: use of closed network connection
	E0806 07:51:41.219815       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52523: use of closed network connection
	E0806 07:51:44.319914       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52554: use of closed network connection
	E0806 07:51:44.505332       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52556: use of closed network connection
	
	
	==> kube-controller-manager [e6892e6b325e] <==
	I0806 07:38:14.911267       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0806 07:38:14.915445       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0806 07:38:14.917635       1 shared_informer.go:320] Caches are synced for resource quota
	I0806 07:38:15.016538       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	I0806 07:38:15.198343       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="389.133142ms"
	I0806 07:38:15.220236       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.849107ms"
	I0806 07:38:15.220368       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="94.121µs"
	I0806 07:38:15.344428       1 shared_informer.go:320] Caches are synced for garbage collector
	I0806 07:38:15.355219       1 shared_informer.go:320] Caches are synced for garbage collector
	I0806 07:38:15.355235       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0806 07:38:15.401729       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="38.655935ms"
	I0806 07:38:15.431945       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="30.14675ms"
	I0806 07:38:15.458535       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="26.562482ms"
	I0806 07:38:15.458649       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="50.614µs"
	I0806 07:38:30.766337       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="35.896µs"
	I0806 07:38:30.775206       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.914µs"
	I0806 07:38:31.717892       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="59.878µs"
	I0806 07:38:31.736658       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="9.976174ms"
	I0806 07:38:31.737084       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.186µs"
	I0806 07:38:34.714007       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0806 07:39:52.487758       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.078135ms"
	I0806 07:39:52.498018       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.216294ms"
	I0806 07:39:52.498073       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.228µs"
	I0806 07:39:55.173384       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.984127ms"
	I0806 07:39:55.173460       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.692µs"
	
	
	==> kube-proxy [10a202844745] <==
	I0806 07:38:15.590518       1 server_linux.go:69] "Using iptables proxy"
	I0806 07:38:15.601869       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.13"]
	I0806 07:38:15.662400       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0806 07:38:15.662440       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0806 07:38:15.662490       1 server_linux.go:165] "Using iptables Proxier"
	I0806 07:38:15.664791       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0806 07:38:15.664918       1 server.go:872] "Version info" version="v1.30.3"
	I0806 07:38:15.664946       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 07:38:15.665753       1 config.go:192] "Starting service config controller"
	I0806 07:38:15.665783       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0806 07:38:15.665799       1 config.go:101] "Starting endpoint slice config controller"
	I0806 07:38:15.665822       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0806 07:38:15.667388       1 config.go:319] "Starting node config controller"
	I0806 07:38:15.667416       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0806 07:38:15.765917       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0806 07:38:15.765965       1 shared_informer.go:320] Caches are synced for service config
	I0806 07:38:15.767534       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [09c41cba0052] <==
	W0806 07:37:58.445840       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0806 07:37:58.445932       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0806 07:37:58.446107       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0806 07:37:58.446242       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0806 07:37:58.446116       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0806 07:37:58.446419       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0806 07:37:58.445401       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0806 07:37:58.446582       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0806 07:37:58.446196       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0806 07:37:58.446734       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0806 07:37:59.253603       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0806 07:37:59.253776       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0806 07:37:59.282330       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0806 07:37:59.282504       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0806 07:37:59.305407       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0806 07:37:59.305621       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0806 07:37:59.351009       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0806 07:37:59.351049       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0806 07:37:59.487287       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0806 07:37:59.487395       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0806 07:37:59.506883       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0806 07:37:59.506925       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0806 07:37:59.509357       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0806 07:37:59.509392       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0806 07:38:01.840667       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 06 07:47:00 multinode-100000 kubelet[2051]: E0806 07:47:00.482719    2051 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:47:00 multinode-100000 kubelet[2051]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:47:00 multinode-100000 kubelet[2051]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:47:00 multinode-100000 kubelet[2051]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:47:00 multinode-100000 kubelet[2051]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:48:00 multinode-100000 kubelet[2051]: E0806 07:48:00.482201    2051 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:48:00 multinode-100000 kubelet[2051]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:48:00 multinode-100000 kubelet[2051]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:48:00 multinode-100000 kubelet[2051]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:48:00 multinode-100000 kubelet[2051]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:49:00 multinode-100000 kubelet[2051]: E0806 07:49:00.485250    2051 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:49:00 multinode-100000 kubelet[2051]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:49:00 multinode-100000 kubelet[2051]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:49:00 multinode-100000 kubelet[2051]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:49:00 multinode-100000 kubelet[2051]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:50:00 multinode-100000 kubelet[2051]: E0806 07:50:00.481450    2051 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:50:00 multinode-100000 kubelet[2051]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:50:00 multinode-100000 kubelet[2051]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:50:00 multinode-100000 kubelet[2051]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:50:00 multinode-100000 kubelet[2051]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:51:00 multinode-100000 kubelet[2051]: E0806 07:51:00.483720    2051 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:51:00 multinode-100000 kubelet[2051]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:51:00 multinode-100000 kubelet[2051]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:51:00 multinode-100000 kubelet[2051]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:51:00 multinode-100000 kubelet[2051]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [47e0c0c6895e] <==
	I0806 07:38:31.347790       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0806 07:38:31.362641       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0806 07:38:31.362689       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0806 07:38:31.380276       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0806 07:38:31.381044       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-100000_c7848ced-7c56-4ea5-84d6-257282f6fd56!
	I0806 07:38:31.382785       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"161b611b-7c0d-4908-b494-e0f62b136e12", APIVersion:"v1", ResourceVersion:"439", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-100000_c7848ced-7c56-4ea5-84d6-257282f6fd56 became leader
	I0806 07:38:31.481893       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-100000_c7848ced-7c56-4ea5-84d6-257282f6fd56!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-100000 -n multinode-100000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-100000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-6l7f2
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-100000 describe pod busybox-fc5497c4f-6l7f2
helpers_test.go:282: (dbg) kubectl --context multinode-100000 describe pod busybox-fc5497c4f-6l7f2:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-6l7f2
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4lx7j (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-4lx7j:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  105s (x3 over 11m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.29s)

                                                
                                    
TestMultiNode/serial/AddNode (47.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-100000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-100000 -v 3 --alsologtostderr: (44.663001152s)
multinode_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-100000 status --alsologtostderr
multinode_test.go:127: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-100000 status --alsologtostderr: exit status 2 (326.819155ms)

                                                
                                                
-- stdout --
	multinode-100000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-100000-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-100000-m03
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 00:52:31.809806    5111 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:52:31.810076    5111 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:52:31.810082    5111 out.go:304] Setting ErrFile to fd 2...
	I0806 00:52:31.810085    5111 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:52:31.810278    5111 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	I0806 00:52:31.810481    5111 out.go:298] Setting JSON to false
	I0806 00:52:31.810503    5111 mustload.go:65] Loading cluster: multinode-100000
	I0806 00:52:31.810543    5111 notify.go:220] Checking for updates...
	I0806 00:52:31.810870    5111 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:52:31.810886    5111 status.go:255] checking status of multinode-100000 ...
	I0806 00:52:31.811333    5111 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:52:31.811383    5111 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:52:31.820393    5111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52621
	I0806 00:52:31.820733    5111 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:52:31.821161    5111 main.go:141] libmachine: Using API Version  1
	I0806 00:52:31.821173    5111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:52:31.821427    5111 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:52:31.821560    5111 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:52:31.821657    5111 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:52:31.821742    5111 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:52:31.822703    5111 status.go:330] multinode-100000 host status = "Running" (err=<nil>)
	I0806 00:52:31.822721    5111 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:52:31.822976    5111 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:52:31.822995    5111 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:52:31.831635    5111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52623
	I0806 00:52:31.831967    5111 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:52:31.832318    5111 main.go:141] libmachine: Using API Version  1
	I0806 00:52:31.832334    5111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:52:31.832613    5111 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:52:31.832749    5111 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:52:31.832830    5111 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:52:31.833086    5111 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:52:31.833115    5111 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:52:31.842189    5111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52625
	I0806 00:52:31.842542    5111 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:52:31.842857    5111 main.go:141] libmachine: Using API Version  1
	I0806 00:52:31.842867    5111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:52:31.843106    5111 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:52:31.843221    5111 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:52:31.843365    5111 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:52:31.843384    5111 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:52:31.843468    5111 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:52:31.843553    5111 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:52:31.843643    5111 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:52:31.843728    5111 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:52:31.880587    5111 ssh_runner.go:195] Run: systemctl --version
	I0806 00:52:31.885375    5111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:52:31.900394    5111 kubeconfig.go:125] found "multinode-100000" server: "https://192.169.0.13:8443"
	I0806 00:52:31.900420    5111 api_server.go:166] Checking apiserver status ...
	I0806 00:52:31.900466    5111 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:52:31.913285    5111 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1953/cgroup
	W0806 00:52:31.921163    5111 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1953/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0806 00:52:31.921226    5111 ssh_runner.go:195] Run: ls
	I0806 00:52:31.924699    5111 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0806 00:52:31.928714    5111 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0806 00:52:31.928729    5111 status.go:422] multinode-100000 apiserver status = Running (err=<nil>)
	I0806 00:52:31.928740    5111 status.go:257] multinode-100000 status: &{Name:multinode-100000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 00:52:31.928755    5111 status.go:255] checking status of multinode-100000-m02 ...
	I0806 00:52:31.929038    5111 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:52:31.929065    5111 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:52:31.938058    5111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52629
	I0806 00:52:31.938403    5111 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:52:31.938759    5111 main.go:141] libmachine: Using API Version  1
	I0806 00:52:31.938783    5111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:52:31.939005    5111 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:52:31.939124    5111 main.go:141] libmachine: (multinode-100000-m02) Calling .GetState
	I0806 00:52:31.939216    5111 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:52:31.939308    5111 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:52:31.940284    5111 status.go:330] multinode-100000-m02 host status = "Running" (err=<nil>)
	I0806 00:52:31.940295    5111 host.go:66] Checking if "multinode-100000-m02" exists ...
	I0806 00:52:31.940556    5111 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:52:31.940581    5111 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:52:31.949447    5111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52631
	I0806 00:52:31.949824    5111 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:52:31.950171    5111 main.go:141] libmachine: Using API Version  1
	I0806 00:52:31.950196    5111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:52:31.950446    5111 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:52:31.950568    5111 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:52:31.950669    5111 host.go:66] Checking if "multinode-100000-m02" exists ...
	I0806 00:52:31.950947    5111 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:52:31.950973    5111 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:52:31.959541    5111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52633
	I0806 00:52:31.959872    5111 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:52:31.960225    5111 main.go:141] libmachine: Using API Version  1
	I0806 00:52:31.960242    5111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:52:31.960465    5111 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:52:31.960567    5111 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:52:31.960686    5111 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:52:31.960697    5111 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:52:31.960773    5111 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:52:31.960859    5111 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:52:31.960972    5111 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:52:31.961043    5111 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:52:31.996379    5111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:52:32.006490    5111 status.go:257] multinode-100000-m02 status: &{Name:multinode-100000-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0806 00:52:32.006506    5111 status.go:255] checking status of multinode-100000-m03 ...
	I0806 00:52:32.006792    5111 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:52:32.006814    5111 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:52:32.015261    5111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52636
	I0806 00:52:32.015615    5111 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:52:32.015932    5111 main.go:141] libmachine: Using API Version  1
	I0806 00:52:32.015944    5111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:52:32.016183    5111 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:52:32.016303    5111 main.go:141] libmachine: (multinode-100000-m03) Calling .GetState
	I0806 00:52:32.016385    5111 main.go:141] libmachine: (multinode-100000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:52:32.016470    5111 main.go:141] libmachine: (multinode-100000-m03) DBG | hyperkit pid from json: 5072
	I0806 00:52:32.017422    5111 status.go:330] multinode-100000-m03 host status = "Running" (err=<nil>)
	I0806 00:52:32.017441    5111 host.go:66] Checking if "multinode-100000-m03" exists ...
	I0806 00:52:32.017696    5111 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:52:32.017720    5111 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:52:32.026154    5111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52638
	I0806 00:52:32.026473    5111 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:52:32.026775    5111 main.go:141] libmachine: Using API Version  1
	I0806 00:52:32.026785    5111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:52:32.027003    5111 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:52:32.027114    5111 main.go:141] libmachine: (multinode-100000-m03) Calling .GetIP
	I0806 00:52:32.027198    5111 host.go:66] Checking if "multinode-100000-m03" exists ...
	I0806 00:52:32.027479    5111 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:52:32.027505    5111 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:52:32.035840    5111 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52640
	I0806 00:52:32.036187    5111 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:52:32.036537    5111 main.go:141] libmachine: Using API Version  1
	I0806 00:52:32.036554    5111 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:52:32.036753    5111 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:52:32.036858    5111 main.go:141] libmachine: (multinode-100000-m03) Calling .DriverName
	I0806 00:52:32.036991    5111 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:52:32.037002    5111 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHHostname
	I0806 00:52:32.037080    5111 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHPort
	I0806 00:52:32.037164    5111 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:52:32.037249    5111 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHUsername
	I0806 00:52:32.037329    5111 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/id_rsa Username:docker}
	I0806 00:52:32.070489    5111 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:52:32.081584    5111 status.go:257] multinode-100000-m03 status: &{Name:multinode-100000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:129: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-100000 status --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-100000 -n multinode-100000
helpers_test.go:244: <<< TestMultiNode/serial/AddNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/AddNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-100000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-100000 logs -n 25: (1.965256182s)
helpers_test.go:252: TestMultiNode/serial/AddNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| kubectl | -p multinode-100000 -- apply -f                   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:39 PDT | 06 Aug 24 00:39 PDT |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- rollout                    | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:39 PDT |                     |
	|         | status deployment/busybox                         |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:49 PDT | 06 Aug 24 00:49 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:49 PDT | 06 Aug 24 00:49 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:49 PDT | 06 Aug 24 00:49 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec                       | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT |                     |
	|         | busybox-fc5497c4f-6l7f2 --                        |                  |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec                       | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | busybox-fc5497c4f-dzbn7 --                        |                  |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec                       | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT |                     |
	|         | busybox-fc5497c4f-6l7f2 --                        |                  |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec                       | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | busybox-fc5497c4f-dzbn7 --                        |                  |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec                       | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT |                     |
	|         | busybox-fc5497c4f-6l7f2 -- nslookup               |                  |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec                       | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | busybox-fc5497c4f-dzbn7 -- nslookup               |                  |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec                       | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT |                     |
	|         | busybox-fc5497c4f-6l7f2                           |                  |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec                       | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | busybox-fc5497c4f-dzbn7                           |                  |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec                       | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | busybox-fc5497c4f-dzbn7 -- sh                     |                  |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1                          |                  |         |         |                     |                     |
	| node    | add -p multinode-100000 -v 3                      | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:52 PDT |
	|         | --alsologtostderr                                 |                  |         |         |                     |                     |
	|---------|---------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 00:35:32
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 00:35:32.676325    4292 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:35:32.676601    4292 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:35:32.676607    4292 out.go:304] Setting ErrFile to fd 2...
	I0806 00:35:32.676610    4292 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:35:32.676768    4292 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	I0806 00:35:32.678248    4292 out.go:298] Setting JSON to false
	I0806 00:35:32.700659    4292 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2094,"bootTime":1722927638,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0806 00:35:32.700749    4292 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 00:35:32.723275    4292 out.go:177] * [multinode-100000] minikube v1.33.1 on Darwin 14.5
	I0806 00:35:32.765686    4292 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 00:35:32.765838    4292 notify.go:220] Checking for updates...
	I0806 00:35:32.808341    4292 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:35:32.829496    4292 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0806 00:35:32.850407    4292 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:35:32.871672    4292 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 00:35:32.892641    4292 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 00:35:32.913945    4292 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:35:32.944520    4292 out.go:177] * Using the hyperkit driver based on user configuration
	I0806 00:35:32.986143    4292 start.go:297] selected driver: hyperkit
	I0806 00:35:32.986161    4292 start.go:901] validating driver "hyperkit" against <nil>
	I0806 00:35:32.986176    4292 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 00:35:32.989717    4292 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:35:32.989824    4292 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19370-944/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0806 00:35:32.998218    4292 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0806 00:35:33.002169    4292 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:35:33.002189    4292 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0806 00:35:33.002223    4292 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 00:35:33.002423    4292 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 00:35:33.002481    4292 cni.go:84] Creating CNI manager for ""
	I0806 00:35:33.002490    4292 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0806 00:35:33.002502    4292 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0806 00:35:33.002569    4292 start.go:340] cluster config:
	{Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:35:33.002652    4292 iso.go:125] acquiring lock: {Name:mka9ceffb203a07dd8928fb34e5b66df1a4204ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:35:33.044508    4292 out.go:177] * Starting "multinode-100000" primary control-plane node in "multinode-100000" cluster
	I0806 00:35:33.065219    4292 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:35:33.065293    4292 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0806 00:35:33.065354    4292 cache.go:56] Caching tarball of preloaded images
	I0806 00:35:33.065635    4292 preload.go:172] Found /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0806 00:35:33.065654    4292 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 00:35:33.066173    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:35:33.066211    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json: {Name:mk72349cbf3074da6761af52b168e673548f3ffe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:35:33.066817    4292 start.go:360] acquireMachinesLock for multinode-100000: {Name:mk23fe223591838ba69a1052c4474834b6e8897d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:35:33.066922    4292 start.go:364] duration metric: took 85.684µs to acquireMachinesLock for "multinode-100000"
	I0806 00:35:33.066972    4292 start.go:93] Provisioning new machine with config: &{Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 00:35:33.067065    4292 start.go:125] createHost starting for "" (driver="hyperkit")
	I0806 00:35:33.088582    4292 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 00:35:33.088841    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:35:33.088907    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:35:33.098805    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52410
	I0806 00:35:33.099159    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:35:33.099600    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:35:33.099614    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:35:33.099818    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:35:33.099943    4292 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:35:33.100033    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:33.100130    4292 start.go:159] libmachine.API.Create for "multinode-100000" (driver="hyperkit")
	I0806 00:35:33.100152    4292 client.go:168] LocalClient.Create starting
	I0806 00:35:33.100189    4292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem
	I0806 00:35:33.100243    4292 main.go:141] libmachine: Decoding PEM data...
	I0806 00:35:33.100257    4292 main.go:141] libmachine: Parsing certificate...
	I0806 00:35:33.100320    4292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem
	I0806 00:35:33.100359    4292 main.go:141] libmachine: Decoding PEM data...
	I0806 00:35:33.100370    4292 main.go:141] libmachine: Parsing certificate...
	I0806 00:35:33.100382    4292 main.go:141] libmachine: Running pre-create checks...
	I0806 00:35:33.100392    4292 main.go:141] libmachine: (multinode-100000) Calling .PreCreateCheck
	I0806 00:35:33.100485    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:33.100635    4292 main.go:141] libmachine: (multinode-100000) Calling .GetConfigRaw
	I0806 00:35:33.109837    4292 main.go:141] libmachine: Creating machine...
	I0806 00:35:33.109854    4292 main.go:141] libmachine: (multinode-100000) Calling .Create
	I0806 00:35:33.110025    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:33.110277    4292 main.go:141] libmachine: (multinode-100000) DBG | I0806 00:35:33.110022    4300 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 00:35:33.110418    4292 main.go:141] libmachine: (multinode-100000) Downloading /Users/jenkins/minikube-integration/19370-944/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-944/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 00:35:33.295827    4292 main.go:141] libmachine: (multinode-100000) DBG | I0806 00:35:33.295690    4300 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa...
	I0806 00:35:33.502634    4292 main.go:141] libmachine: (multinode-100000) DBG | I0806 00:35:33.502493    4300 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/multinode-100000.rawdisk...
	I0806 00:35:33.502655    4292 main.go:141] libmachine: (multinode-100000) DBG | Writing magic tar header
	I0806 00:35:33.502665    4292 main.go:141] libmachine: (multinode-100000) DBG | Writing SSH key tar header
	I0806 00:35:33.503537    4292 main.go:141] libmachine: (multinode-100000) DBG | I0806 00:35:33.503390    4300 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000 ...
	I0806 00:35:33.877390    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:33.877412    4292 main.go:141] libmachine: (multinode-100000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/hyperkit.pid
	I0806 00:35:33.877424    4292 main.go:141] libmachine: (multinode-100000) DBG | Using UUID 9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848
	I0806 00:35:33.988705    4292 main.go:141] libmachine: (multinode-100000) DBG | Generated MAC 1a:eb:5b:3:28:91
	I0806 00:35:33.988725    4292 main.go:141] libmachine: (multinode-100000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000
	I0806 00:35:33.988759    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000aa330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(
nil)}
	I0806 00:35:33.988793    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000aa330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(
nil)}
	I0806 00:35:33.988839    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/multinode-100000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage,/Users/jenkins/minikube-integration/19370-944/
.minikube/machines/multinode-100000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"}
	I0806 00:35:33.988870    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/multinode-100000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/console-ring -f kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/initrd,earlyprintk=serial
loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"
	I0806 00:35:33.988893    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0806 00:35:33.991956    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: Pid is 4303
	I0806 00:35:33.992376    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 0
	I0806 00:35:33.992391    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:33.992446    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:33.993278    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:33.993360    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:33.993380    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:33.993405    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:33.993424    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:33.993437    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:33.993449    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:33.993464    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:33.993498    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:33.993520    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:33.993540    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:33.993552    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:33.993562    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:33.999245    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0806 00:35:34.053136    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0806 00:35:34.053714    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:35:34.053737    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:35:34.053746    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:35:34.053754    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:35:34.433368    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0806 00:35:34.433384    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0806 00:35:34.548018    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:35:34.548040    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:35:34.548066    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:35:34.548085    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:35:34.548944    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0806 00:35:34.548954    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0806 00:35:35.995149    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 1
	I0806 00:35:35.995163    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:35.995266    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:35.996054    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:35.996094    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:35.996108    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:35.996132    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:35.996169    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:35.996185    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:35.996200    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:35.996223    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:35.996236    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:35.996250    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:35.996258    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:35.996265    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:35.996272    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:37.997721    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 2
	I0806 00:35:37.997737    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:37.997833    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:37.998751    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:37.998796    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:37.998808    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:37.998817    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:37.998824    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:37.998834    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:37.998843    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:37.998850    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:37.998857    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:37.998872    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:37.998885    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:37.998906    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:37.998915    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:40.000050    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 3
	I0806 00:35:40.000064    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:40.000167    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:40.000922    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:40.000982    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:40.000992    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:40.001002    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:40.001009    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:40.001016    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:40.001021    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:40.001028    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:40.001034    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:40.001051    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:40.001065    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:40.001075    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:40.001092    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:40.125670    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:40 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0806 00:35:40.125726    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:40 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0806 00:35:40.125735    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:40 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0806 00:35:40.149566    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:40 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0806 00:35:42.001968    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 4
	I0806 00:35:42.001983    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:42.002066    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:42.002835    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:42.002890    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:42.002900    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:42.002909    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:42.002917    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:42.002940    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:42.002948    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:42.002955    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:42.002964    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:42.002970    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:42.002978    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:42.002985    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:42.002996    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:44.004662    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 5
	I0806 00:35:44.004678    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:44.004700    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:44.005526    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:44.005569    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:35:44.005581    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:35:44.005591    4292 main.go:141] libmachine: (multinode-100000) DBG | Found match: 1a:eb:5b:3:28:91
	I0806 00:35:44.005619    4292 main.go:141] libmachine: (multinode-100000) DBG | IP: 192.169.0.13
	I0806 00:35:44.005700    4292 main.go:141] libmachine: (multinode-100000) Calling .GetConfigRaw
	I0806 00:35:44.006323    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:44.006428    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:44.006524    4292 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0806 00:35:44.006537    4292 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:35:44.006634    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:44.006694    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:44.007476    4292 main.go:141] libmachine: Detecting operating system of created instance...
	I0806 00:35:44.007487    4292 main.go:141] libmachine: Waiting for SSH to be available...
	I0806 00:35:44.007493    4292 main.go:141] libmachine: Getting to WaitForSSH function...
	I0806 00:35:44.007498    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:44.007591    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:44.007674    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:44.007764    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:44.007853    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:44.007987    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:44.008184    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:44.008192    4292 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0806 00:35:45.076448    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:35:45.076465    4292 main.go:141] libmachine: Detecting the provisioner...
	I0806 00:35:45.076471    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.076624    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.076724    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.076819    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.076915    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.077045    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.077189    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.077197    4292 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0806 00:35:45.144548    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0806 00:35:45.144591    4292 main.go:141] libmachine: found compatible host: buildroot
	I0806 00:35:45.144598    4292 main.go:141] libmachine: Provisioning with buildroot...
	I0806 00:35:45.144603    4292 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:35:45.144740    4292 buildroot.go:166] provisioning hostname "multinode-100000"
	I0806 00:35:45.144749    4292 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:35:45.144843    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.144938    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.145034    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.145124    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.145213    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.145351    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.145492    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.145501    4292 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-100000 && echo "multinode-100000" | sudo tee /etc/hostname
	I0806 00:35:45.223228    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-100000
	
	I0806 00:35:45.223249    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.223379    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.223481    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.223570    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.223660    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.223790    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.223939    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.223951    4292 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-100000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-100000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-100000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 00:35:45.292034    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:35:45.292059    4292 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19370-944/.minikube CaCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19370-944/.minikube}
	I0806 00:35:45.292078    4292 buildroot.go:174] setting up certificates
	I0806 00:35:45.292089    4292 provision.go:84] configureAuth start
	I0806 00:35:45.292095    4292 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:35:45.292225    4292 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:35:45.292323    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.292419    4292 provision.go:143] copyHostCerts
	I0806 00:35:45.292449    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:35:45.292512    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem, removing ...
	I0806 00:35:45.292520    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:35:45.292668    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem (1078 bytes)
	I0806 00:35:45.292900    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:35:45.292931    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem, removing ...
	I0806 00:35:45.292935    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:35:45.293022    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem (1123 bytes)
	I0806 00:35:45.293179    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:35:45.293218    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem, removing ...
	I0806 00:35:45.293223    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:35:45.293307    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem (1679 bytes)
	I0806 00:35:45.293461    4292 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem org=jenkins.multinode-100000 san=[127.0.0.1 192.169.0.13 localhost minikube multinode-100000]
	I0806 00:35:45.520073    4292 provision.go:177] copyRemoteCerts
	I0806 00:35:45.520131    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 00:35:45.520149    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.520304    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.520400    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.520492    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.520588    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:35:45.562400    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0806 00:35:45.562481    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 00:35:45.581346    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0806 00:35:45.581402    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0806 00:35:45.600722    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0806 00:35:45.600779    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 00:35:45.620152    4292 provision.go:87] duration metric: took 328.044128ms to configureAuth
	I0806 00:35:45.620167    4292 buildroot.go:189] setting minikube options for container-runtime
	I0806 00:35:45.620308    4292 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:35:45.620324    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:45.620480    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.620572    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.620655    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.620746    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.620832    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.620951    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.621092    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.621099    4292 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0806 00:35:45.688009    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0806 00:35:45.688025    4292 buildroot.go:70] root file system type: tmpfs
	I0806 00:35:45.688103    4292 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0806 00:35:45.688116    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.688258    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.688371    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.688463    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.688579    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.688745    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.688882    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.688931    4292 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0806 00:35:45.766293    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0806 00:35:45.766319    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.766466    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.766564    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.766645    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.766724    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.766843    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.766987    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.766999    4292 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0806 00:35:47.341714    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
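The transcript above shows the idempotent unit-install pattern minikube uses: write the candidate unit to `docker.service.new`, and only swap it into place (followed by `daemon-reload`, `enable`, `restart`) when `diff` reports a change. A minimal sketch of that pattern, using a throwaway temp directory instead of the real `/lib/systemd/system`:

```shell
# Sketch of the write-then-diff-then-swap pattern from the log above.
# Paths are illustrative (a temp dir), not the real /lib/systemd/system.
set -eu
dir="$(mktemp -d)"
unit="$dir/docker.service"

printf '%s\n' '[Unit]' 'Description=demo' > "$unit.new"

# Install only when the content actually changed; otherwise leave the
# running service alone (diff exits 0 on identical files, non-zero on
# a difference or a missing file -- the log hit the missing-file case).
if ! diff -u "$unit" "$unit.new" >/dev/null 2>&1; then
    mv "$unit.new" "$unit"
    echo "unit updated"    # the real flow would daemon-reload + restart here
else
    rm -f "$unit.new"
    echo "unit unchanged"
fi
```

On a first run (no existing unit file, exactly as in the log's `diff: can't stat` output) this takes the update branch; a second run with identical content takes the unchanged branch and touches nothing.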
	I0806 00:35:47.341733    4292 main.go:141] libmachine: Checking connection to Docker...
	I0806 00:35:47.341750    4292 main.go:141] libmachine: (multinode-100000) Calling .GetURL
	I0806 00:35:47.341889    4292 main.go:141] libmachine: Docker is up and running!
	I0806 00:35:47.341898    4292 main.go:141] libmachine: Reticulating splines...
	I0806 00:35:47.341902    4292 client.go:171] duration metric: took 14.241464585s to LocalClient.Create
	I0806 00:35:47.341919    4292 start.go:167] duration metric: took 14.241510649s to libmachine.API.Create "multinode-100000"
	I0806 00:35:47.341930    4292 start.go:293] postStartSetup for "multinode-100000" (driver="hyperkit")
	I0806 00:35:47.341937    4292 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 00:35:47.341947    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.342092    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 00:35:47.342105    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:47.342199    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:47.342285    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.342379    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:47.342467    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:35:47.382587    4292 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 00:35:47.385469    4292 command_runner.go:130] > NAME=Buildroot
	I0806 00:35:47.385477    4292 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0806 00:35:47.385481    4292 command_runner.go:130] > ID=buildroot
	I0806 00:35:47.385485    4292 command_runner.go:130] > VERSION_ID=2023.02.9
	I0806 00:35:47.385489    4292 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0806 00:35:47.385581    4292 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 00:35:47.385594    4292 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/addons for local assets ...
	I0806 00:35:47.385696    4292 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/files for local assets ...
	I0806 00:35:47.385887    4292 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> 14372.pem in /etc/ssl/certs
	I0806 00:35:47.385903    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> /etc/ssl/certs/14372.pem
	I0806 00:35:47.386118    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 00:35:47.394135    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem --> /etc/ssl/certs/14372.pem (1708 bytes)
	I0806 00:35:47.413151    4292 start.go:296] duration metric: took 71.212336ms for postStartSetup
	I0806 00:35:47.413177    4292 main.go:141] libmachine: (multinode-100000) Calling .GetConfigRaw
	I0806 00:35:47.413783    4292 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:35:47.413932    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:35:47.414265    4292 start.go:128] duration metric: took 14.346903661s to createHost
	I0806 00:35:47.414279    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:47.414369    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:47.414451    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.414534    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.414620    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:47.414723    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:47.414850    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:47.414859    4292 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 00:35:47.480376    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722929747.524109427
	
	I0806 00:35:47.480388    4292 fix.go:216] guest clock: 1722929747.524109427
	I0806 00:35:47.480393    4292 fix.go:229] Guest: 2024-08-06 00:35:47.524109427 -0700 PDT Remote: 2024-08-06 00:35:47.414273 -0700 PDT m=+14.774098631 (delta=109.836427ms)
	I0806 00:35:47.480413    4292 fix.go:200] guest clock delta is within tolerance: 109.836427ms
	I0806 00:35:47.480416    4292 start.go:83] releasing machines lock for "multinode-100000", held for 14.413201307s
	I0806 00:35:47.480435    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.480582    4292 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:35:47.480686    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.481025    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.481144    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.481220    4292 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 00:35:47.481250    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:47.481279    4292 ssh_runner.go:195] Run: cat /version.json
	I0806 00:35:47.481291    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:47.481352    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:47.481353    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:47.481449    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.481463    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.481541    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:47.481556    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:47.481638    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:35:47.481653    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:35:47.582613    4292 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0806 00:35:47.583428    4292 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0806 00:35:47.583596    4292 ssh_runner.go:195] Run: systemctl --version
	I0806 00:35:47.588843    4292 command_runner.go:130] > systemd 252 (252)
	I0806 00:35:47.588866    4292 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0806 00:35:47.588920    4292 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0806 00:35:47.593612    4292 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0806 00:35:47.593639    4292 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 00:35:47.593687    4292 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 00:35:47.607350    4292 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0806 00:35:47.607480    4292 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 00:35:47.607494    4292 start.go:495] detecting cgroup driver to use...
	I0806 00:35:47.607588    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:35:47.622260    4292 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0806 00:35:47.622586    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0806 00:35:47.631764    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0806 00:35:47.640650    4292 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0806 00:35:47.640704    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0806 00:35:47.649724    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:35:47.658558    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0806 00:35:47.667341    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:35:47.677183    4292 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 00:35:47.686281    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0806 00:35:47.695266    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0806 00:35:47.704014    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0806 00:35:47.712970    4292 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 00:35:47.720743    4292 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0806 00:35:47.720841    4292 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 00:35:47.728846    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:35:47.828742    4292 ssh_runner.go:195] Run: sudo systemctl restart containerd
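The sequence above reconfigures containerd's cgroup driver with in-place `sed` edits to `/etc/containerd/config.toml` before restarting the service. A sketch of the key substitution (forcing `SystemdCgroup = false` so containerd uses cgroupfs), applied to a scratch copy rather than the real config; note the `sed -i -r` flags assume GNU sed, as on the Buildroot guest:

```shell
# Sketch of the cgroup-driver rewrite from the log, on a throwaway file.
set -eu
cfg="$(mktemp)"
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# Same substitution as the log's sed invocation: \1 preserves the
# original indentation while the value is rewritten.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup' "$cfg"   # ->   SystemdCgroup = false
```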
	I0806 00:35:47.848191    4292 start.go:495] detecting cgroup driver to use...
	I0806 00:35:47.848271    4292 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0806 00:35:47.862066    4292 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0806 00:35:47.862604    4292 command_runner.go:130] > [Unit]
	I0806 00:35:47.862619    4292 command_runner.go:130] > Description=Docker Application Container Engine
	I0806 00:35:47.862625    4292 command_runner.go:130] > Documentation=https://docs.docker.com
	I0806 00:35:47.862630    4292 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0806 00:35:47.862634    4292 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0806 00:35:47.862642    4292 command_runner.go:130] > StartLimitBurst=3
	I0806 00:35:47.862646    4292 command_runner.go:130] > StartLimitIntervalSec=60
	I0806 00:35:47.862663    4292 command_runner.go:130] > [Service]
	I0806 00:35:47.862670    4292 command_runner.go:130] > Type=notify
	I0806 00:35:47.862674    4292 command_runner.go:130] > Restart=on-failure
	I0806 00:35:47.862696    4292 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0806 00:35:47.862704    4292 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0806 00:35:47.862710    4292 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0806 00:35:47.862716    4292 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0806 00:35:47.862724    4292 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0806 00:35:47.862731    4292 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0806 00:35:47.862742    4292 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0806 00:35:47.862756    4292 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0806 00:35:47.862768    4292 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0806 00:35:47.862789    4292 command_runner.go:130] > ExecStart=
	I0806 00:35:47.862803    4292 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0806 00:35:47.862808    4292 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0806 00:35:47.862814    4292 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0806 00:35:47.862820    4292 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0806 00:35:47.862826    4292 command_runner.go:130] > LimitNOFILE=infinity
	I0806 00:35:47.862831    4292 command_runner.go:130] > LimitNPROC=infinity
	I0806 00:35:47.862835    4292 command_runner.go:130] > LimitCORE=infinity
	I0806 00:35:47.862840    4292 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0806 00:35:47.862847    4292 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0806 00:35:47.862852    4292 command_runner.go:130] > TasksMax=infinity
	I0806 00:35:47.862857    4292 command_runner.go:130] > TimeoutStartSec=0
	I0806 00:35:47.862864    4292 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0806 00:35:47.862869    4292 command_runner.go:130] > Delegate=yes
	I0806 00:35:47.862875    4292 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0806 00:35:47.862880    4292 command_runner.go:130] > KillMode=process
	I0806 00:35:47.862885    4292 command_runner.go:130] > [Install]
	I0806 00:35:47.862897    4292 command_runner.go:130] > WantedBy=multi-user.target
	I0806 00:35:47.862957    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:35:47.874503    4292 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 00:35:47.888401    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:35:47.899678    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:35:47.910858    4292 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0806 00:35:47.935194    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:35:47.946319    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:35:47.961240    4292 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
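The `printf … | sudo tee /etc/crictl.yaml` step above points crictl at the cri-dockerd socket. The same write, sketched against a temp file instead of the real `/etc/crictl.yaml` (no `sudo` needed for the scratch copy):

```shell
# Sketch of the crictl endpoint switch from the log: rewrite crictl's
# config to target cri-dockerd. Temp path stands in for /etc/crictl.yaml.
set -eu
crictl_yaml="$(mktemp)"
printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' \
    | tee "$crictl_yaml"
```

As in the log, `tee` both writes the file and echoes the line back, which is why the endpoint shows up in the command output.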
	I0806 00:35:47.961509    4292 ssh_runner.go:195] Run: which cri-dockerd
	I0806 00:35:47.964405    4292 command_runner.go:130] > /usr/bin/cri-dockerd
	I0806 00:35:47.964539    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0806 00:35:47.972571    4292 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0806 00:35:47.986114    4292 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0806 00:35:48.089808    4292 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0806 00:35:48.189821    4292 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0806 00:35:48.189902    4292 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0806 00:35:48.205371    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:35:48.305180    4292 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:35:50.610688    4292 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.305442855s)
	I0806 00:35:50.610744    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0806 00:35:50.621917    4292 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0806 00:37:45.085447    4292 ssh_runner.go:235] Completed: sudo systemctl stop cri-docker.socket: (1m54.461245771s)
	I0806 00:37:45.085519    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0806 00:37:45.097196    4292 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0806 00:37:45.197114    4292 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0806 00:37:45.292406    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:37:45.391129    4292 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0806 00:37:45.405046    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0806 00:37:45.416102    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:37:45.533604    4292 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0806 00:37:45.589610    4292 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0806 00:37:45.589706    4292 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0806 00:37:45.594037    4292 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0806 00:37:45.594049    4292 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0806 00:37:45.594054    4292 command_runner.go:130] > Device: 0,22	Inode: 805         Links: 1
	I0806 00:37:45.594060    4292 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0806 00:37:45.594064    4292 command_runner.go:130] > Access: 2024-08-06 07:37:45.625216614 +0000
	I0806 00:37:45.594069    4292 command_runner.go:130] > Modify: 2024-08-06 07:37:45.625216614 +0000
	I0806 00:37:45.594073    4292 command_runner.go:130] > Change: 2024-08-06 07:37:45.627215775 +0000
	I0806 00:37:45.594076    4292 command_runner.go:130] >  Birth: -
	I0806 00:37:45.594117    4292 start.go:563] Will wait 60s for crictl version
	I0806 00:37:45.594161    4292 ssh_runner.go:195] Run: which crictl
	I0806 00:37:45.596956    4292 command_runner.go:130] > /usr/bin/crictl
	I0806 00:37:45.597171    4292 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 00:37:45.621060    4292 command_runner.go:130] > Version:  0.1.0
	I0806 00:37:45.621116    4292 command_runner.go:130] > RuntimeName:  docker
	I0806 00:37:45.621195    4292 command_runner.go:130] > RuntimeVersion:  27.1.1
	I0806 00:37:45.621265    4292 command_runner.go:130] > RuntimeApiVersion:  v1
	I0806 00:37:45.622461    4292 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0806 00:37:45.622524    4292 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0806 00:37:45.639748    4292 command_runner.go:130] > 27.1.1
	I0806 00:37:45.640898    4292 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0806 00:37:45.659970    4292 command_runner.go:130] > 27.1.1
	I0806 00:37:45.682623    4292 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0806 00:37:45.682654    4292 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:37:45.682940    4292 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0806 00:37:45.686120    4292 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
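The one-liner above keeps `host.minikube.internal` current in `/etc/hosts` by filtering out any stale mapping and appending the fresh one. A sketch of that grep-v/append/copy pipeline on a scratch hosts file (the stale `192.169.0.99` entry is an invented example):

```shell
# Sketch of the host.minikube.internal /etc/hosts update from the log,
# applied to a scratch copy rather than the real /etc/hosts.
set -eu
hosts="$(mktemp)"
printf '127.0.0.1\tlocalhost\n192.169.0.99\thost.minikube.internal\n' > "$hosts"

# Drop any stale mapping, then append the current one -- the same shape
# as the log's { grep -v ...; echo ...; } > /tmp/h.$$; cp pipeline.
tmp="$hosts.$$"
{ grep -v $'\thost.minikube.internal$' "$hosts"; \
  printf '192.169.0.1\thost.minikube.internal\n'; } > "$tmp"
cp "$tmp" "$hosts"
grep 'host.minikube.internal' "$hosts"
```

Writing to a temp file and copying it over, rather than redirecting onto the file being read, is what makes the in-place edit safe.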
	I0806 00:37:45.696475    4292 kubeadm.go:883] updating cluster {Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 00:37:45.696537    4292 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:37:45.696591    4292 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0806 00:37:45.709358    4292 docker.go:685] Got preloaded images: 
	I0806 00:37:45.709371    4292 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0806 00:37:45.709415    4292 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0806 00:37:45.717614    4292 command_runner.go:139] > {"Repositories":{}}
	I0806 00:37:45.717741    4292 ssh_runner.go:195] Run: which lz4
	I0806 00:37:45.720684    4292 command_runner.go:130] > /usr/bin/lz4
	I0806 00:37:45.720774    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0806 00:37:45.720887    4292 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0806 00:37:45.723901    4292 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 00:37:45.723990    4292 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 00:37:45.724007    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
	I0806 00:37:46.617374    4292 docker.go:649] duration metric: took 896.51057ms to copy over tarball
	I0806 00:37:46.617438    4292 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0806 00:37:48.962709    4292 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.345209203s)
	I0806 00:37:48.962723    4292 ssh_runner.go:146] rm: /preloaded.tar.lz4
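The preload flow above is: `stat` the tarball on the guest, `scp` it over if absent, extract it into `/var` with lz4, then delete it. A sketch of the same check/extract/cleanup shape with a tiny throwaway tarball; gzip stands in for lz4 here so the sketch runs without the `lz4` binary, and the 360MB k8s image preload is replaced by a one-line payload:

```shell
# Sketch of the preload tarball flow from the log, scaled down.
set -eu
work="$(mktemp -d)"
mkdir -p "$work/var"
echo hello > "$work/payload"
# gzip stands in for the log's lz4 compression
tar -C "$work" -czf "$work/preloaded.tar.gz" payload

# Existence check, as the log performs with `stat` before deciding to scp.
stat "$work/preloaded.tar.gz" >/dev/null

# Extract into the target root, then remove the tarball (log: rm /preloaded.tar.lz4).
tar -C "$work/var" -xzf "$work/preloaded.tar.gz"
rm "$work/preloaded.tar.gz"
cat "$work/var/payload"   # -> hello
```

The real invocation additionally passes `--xattrs --xattrs-include security.capability` so file capabilities survive extraction, which matters for the preloaded container-image layers.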
	I0806 00:37:48.989708    4292 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0806 00:37:48.998314    4292 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.3":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.3":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.3":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d2
89d99da794784d1"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.3":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0806 00:37:48.998434    4292 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0806 00:37:49.011940    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:37:49.104996    4292 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:37:51.441428    4292 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.336367372s)
	I0806 00:37:51.441504    4292 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0806 00:37:51.454654    4292 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0806 00:37:51.454669    4292 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0806 00:37:51.454674    4292 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0806 00:37:51.454682    4292 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0806 00:37:51.454686    4292 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0806 00:37:51.454690    4292 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0806 00:37:51.454695    4292 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0806 00:37:51.454700    4292 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:37:51.455392    4292 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0806 00:37:51.455409    4292 cache_images.go:84] Images are preloaded, skipping loading
	I0806 00:37:51.455420    4292 kubeadm.go:934] updating node { 192.169.0.13 8443 v1.30.3 docker true true} ...
	I0806 00:37:51.455506    4292 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-100000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 00:37:51.455578    4292 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0806 00:37:51.493148    4292 command_runner.go:130] > cgroupfs
	I0806 00:37:51.493761    4292 cni.go:84] Creating CNI manager for ""
	I0806 00:37:51.493770    4292 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0806 00:37:51.493779    4292 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 00:37:51.493799    4292 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.13 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-100000 NodeName:multinode-100000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 00:37:51.493886    4292 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-100000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 00:37:51.493946    4292 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 00:37:51.501517    4292 command_runner.go:130] > kubeadm
	I0806 00:37:51.501524    4292 command_runner.go:130] > kubectl
	I0806 00:37:51.501527    4292 command_runner.go:130] > kubelet
	I0806 00:37:51.501670    4292 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 00:37:51.501712    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 00:37:51.509045    4292 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0806 00:37:51.522572    4292 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 00:37:51.535791    4292 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0806 00:37:51.549550    4292 ssh_runner.go:195] Run: grep 192.169.0.13	control-plane.minikube.internal$ /etc/hosts
	I0806 00:37:51.552639    4292 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 00:37:51.562209    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:37:51.657200    4292 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:37:51.669303    4292 certs.go:68] Setting up /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000 for IP: 192.169.0.13
	I0806 00:37:51.669315    4292 certs.go:194] generating shared ca certs ...
	I0806 00:37:51.669325    4292 certs.go:226] acquiring lock for ca certs: {Name:mk58145664d6c2b1eff70ba1600cc91cf1a11355 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.669518    4292 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.key
	I0806 00:37:51.669593    4292 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.key
	I0806 00:37:51.669606    4292 certs.go:256] generating profile certs ...
	I0806 00:37:51.669656    4292 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key
	I0806 00:37:51.669668    4292 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt with IP's: []
	I0806 00:37:51.792624    4292 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt ...
	I0806 00:37:51.792639    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt: {Name:mk8667fc194de8cf8fded4f6b0b716fe105f94fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.792981    4292 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key ...
	I0806 00:37:51.792989    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key: {Name:mk5693609b0c83eb3bce2eae7a5d8211445280d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.793215    4292 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key.de816dec
	I0806 00:37:51.793229    4292 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt.de816dec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.13]
	I0806 00:37:51.926808    4292 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt.de816dec ...
	I0806 00:37:51.926818    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt.de816dec: {Name:mk977e2f365dba4e3b0587a998566fa4d7926493 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.927069    4292 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key.de816dec ...
	I0806 00:37:51.927078    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key.de816dec: {Name:mkdef83341ea7ae5698bd9e2d60c39f8cd2a4e46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.927285    4292 certs.go:381] copying /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt.de816dec -> /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt
	I0806 00:37:51.927484    4292 certs.go:385] copying /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key.de816dec -> /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key
	I0806 00:37:51.927653    4292 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key
	I0806 00:37:51.927669    4292 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt with IP's: []
	I0806 00:37:52.088433    4292 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt ...
	I0806 00:37:52.088444    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt: {Name:mkc673b9a3bc6652ddb14f333f9d124c615a6826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:52.088718    4292 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key ...
	I0806 00:37:52.088726    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key: {Name:mkf7f90929aa11855cc285630f5ad4bb575ccae4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:52.088945    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0806 00:37:52.088974    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0806 00:37:52.088995    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0806 00:37:52.089015    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0806 00:37:52.089034    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0806 00:37:52.089054    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0806 00:37:52.089072    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0806 00:37:52.089091    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0806 00:37:52.089188    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437.pem (1338 bytes)
	W0806 00:37:52.089246    4292 certs.go:480] ignoring /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437_empty.pem, impossibly tiny 0 bytes
	I0806 00:37:52.089257    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 00:37:52.089300    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem (1078 bytes)
	I0806 00:37:52.089366    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem (1123 bytes)
	I0806 00:37:52.089422    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem (1679 bytes)
	I0806 00:37:52.089542    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem (1708 bytes)
	I0806 00:37:52.089590    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.089613    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.089632    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437.pem -> /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.090046    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 00:37:52.111710    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 00:37:52.131907    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 00:37:52.151479    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0806 00:37:52.171693    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0806 00:37:52.191484    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 00:37:52.211176    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 00:37:52.230802    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 00:37:52.250506    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem --> /usr/share/ca-certificates/14372.pem (1708 bytes)
	I0806 00:37:52.270606    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 00:37:52.290275    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437.pem --> /usr/share/ca-certificates/1437.pem (1338 bytes)
	I0806 00:37:52.309237    4292 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 00:37:52.323119    4292 ssh_runner.go:195] Run: openssl version
	I0806 00:37:52.327113    4292 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0806 00:37:52.327315    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14372.pem && ln -fs /usr/share/ca-certificates/14372.pem /etc/ssl/certs/14372.pem"
	I0806 00:37:52.335532    4292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.338816    4292 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  6 07:14 /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.338844    4292 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:14 /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.338901    4292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.343016    4292 command_runner.go:130] > 3ec20f2e
	I0806 00:37:52.343165    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14372.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 00:37:52.351433    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 00:37:52.362210    4292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.368669    4292 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  6 07:05 /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.368937    4292 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:05 /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.368987    4292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.373469    4292 command_runner.go:130] > b5213941
	I0806 00:37:52.373704    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 00:37:52.384235    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1437.pem && ln -fs /usr/share/ca-certificates/1437.pem /etc/ssl/certs/1437.pem"
	I0806 00:37:52.395305    4292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.400212    4292 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  6 07:14 /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.400421    4292 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:14 /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.400474    4292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.406136    4292 command_runner.go:130] > 51391683
	I0806 00:37:52.406235    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1437.pem /etc/ssl/certs/51391683.0"
	I0806 00:37:52.415464    4292 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 00:37:52.418597    4292 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0806 00:37:52.418637    4292 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0806 00:37:52.418680    4292 kubeadm.go:392] StartCluster: {Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:37:52.418767    4292 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0806 00:37:52.431331    4292 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 00:37:52.439651    4292 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0806 00:37:52.439663    4292 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0806 00:37:52.439684    4292 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0806 00:37:52.439814    4292 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 00:37:52.447838    4292 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 00:37:52.455844    4292 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0806 00:37:52.455854    4292 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0806 00:37:52.455860    4292 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0806 00:37:52.455865    4292 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 00:37:52.455878    4292 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 00:37:52.455884    4292 kubeadm.go:157] found existing configuration files:
	
	I0806 00:37:52.455917    4292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 00:37:52.463564    4292 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 00:37:52.463581    4292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 00:37:52.463638    4292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 00:37:52.471500    4292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 00:37:52.479060    4292 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 00:37:52.479083    4292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 00:37:52.479115    4292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 00:37:52.487038    4292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 00:37:52.494658    4292 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 00:37:52.494678    4292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 00:37:52.494715    4292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 00:37:52.502699    4292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 00:37:52.510396    4292 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 00:37:52.510413    4292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 00:37:52.510448    4292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 00:37:52.518459    4292 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 00:37:52.582551    4292 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0806 00:37:52.582567    4292 command_runner.go:130] > [init] Using Kubernetes version: v1.30.3
	I0806 00:37:52.582622    4292 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 00:37:52.582630    4292 command_runner.go:130] > [preflight] Running pre-flight checks
	I0806 00:37:52.670948    4292 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 00:37:52.670966    4292 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 00:37:52.671056    4292 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 00:37:52.671068    4292 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 00:37:52.671166    4292 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 00:37:52.671175    4292 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 00:37:52.840152    4292 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 00:37:52.840173    4292 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 00:37:52.860448    4292 out.go:204]   - Generating certificates and keys ...
	I0806 00:37:52.860515    4292 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0806 00:37:52.860522    4292 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 00:37:52.860574    4292 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0806 00:37:52.860578    4292 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 00:37:53.262704    4292 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0806 00:37:53.262716    4292 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0806 00:37:53.357977    4292 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0806 00:37:53.357990    4292 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0806 00:37:53.460380    4292 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0806 00:37:53.460383    4292 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0806 00:37:53.557795    4292 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0806 00:37:53.557804    4292 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0806 00:37:53.672961    4292 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0806 00:37:53.672972    4292 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0806 00:37:53.673143    4292 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-100000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0806 00:37:53.673153    4292 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-100000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0806 00:37:53.823821    4292 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0806 00:37:53.823828    4292 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0806 00:37:53.823935    4292 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-100000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0806 00:37:53.823943    4292 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-100000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0806 00:37:53.907043    4292 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0806 00:37:53.907053    4292 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0806 00:37:54.170203    4292 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0806 00:37:54.170215    4292 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0806 00:37:54.232963    4292 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0806 00:37:54.232976    4292 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0806 00:37:54.233108    4292 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 00:37:54.233115    4292 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 00:37:54.560300    4292 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 00:37:54.560310    4292 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 00:37:54.689503    4292 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0806 00:37:54.689520    4292 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0806 00:37:54.772704    4292 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 00:37:54.772714    4292 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 00:37:54.901757    4292 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 00:37:54.901770    4292 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 00:37:55.057967    4292 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 00:37:55.057987    4292 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 00:37:55.058372    4292 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 00:37:55.058381    4292 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 00:37:55.060093    4292 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 00:37:55.060100    4292 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 00:37:55.081494    4292 out.go:204]   - Booting up control plane ...
	I0806 00:37:55.081559    4292 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 00:37:55.081566    4292 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 00:37:55.081622    4292 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 00:37:55.081627    4292 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 00:37:55.081688    4292 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 00:37:55.081706    4292 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 00:37:55.081835    4292 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 00:37:55.081836    4292 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 00:37:55.081921    4292 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 00:37:55.081928    4292 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 00:37:55.081962    4292 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 00:37:55.081972    4292 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0806 00:37:55.190382    4292 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0806 00:37:55.190382    4292 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0806 00:37:55.190467    4292 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0806 00:37:55.190474    4292 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0806 00:37:55.692270    4292 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.007026ms
	I0806 00:37:55.692288    4292 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 502.007026ms
	I0806 00:37:55.692374    4292 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0806 00:37:55.692383    4292 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0806 00:37:59.693684    4292 kubeadm.go:310] [api-check] The API server is healthy after 4.003026548s
	I0806 00:37:59.693693    4292 command_runner.go:130] > [api-check] The API server is healthy after 4.003026548s
	I0806 00:37:59.705633    4292 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0806 00:37:59.705646    4292 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0806 00:37:59.720099    4292 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0806 00:37:59.720109    4292 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0806 00:37:59.738249    4292 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0806 00:37:59.738275    4292 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0806 00:37:59.738423    4292 kubeadm.go:310] [mark-control-plane] Marking the node multinode-100000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0806 00:37:59.738434    4292 command_runner.go:130] > [mark-control-plane] Marking the node multinode-100000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0806 00:37:59.745383    4292 kubeadm.go:310] [bootstrap-token] Using token: vbomjh.qsf72loo4zgv06fc
	I0806 00:37:59.745397    4292 command_runner.go:130] > [bootstrap-token] Using token: vbomjh.qsf72loo4zgv06fc
	I0806 00:37:59.783358    4292 out.go:204]   - Configuring RBAC rules ...
	I0806 00:37:59.783539    4292 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0806 00:37:59.783560    4292 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0806 00:37:59.785907    4292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0806 00:37:59.785948    4292 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0806 00:37:59.826999    4292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0806 00:37:59.827006    4292 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0806 00:37:59.829623    4292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0806 00:37:59.829627    4292 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0806 00:37:59.832217    4292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0806 00:37:59.832231    4292 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0806 00:37:59.834614    4292 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0806 00:37:59.834628    4292 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0806 00:38:00.099434    4292 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0806 00:38:00.099444    4292 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0806 00:38:00.510267    4292 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0806 00:38:00.510286    4292 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0806 00:38:01.098516    4292 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0806 00:38:01.098535    4292 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0806 00:38:01.099426    4292 kubeadm.go:310] 
	I0806 00:38:01.099476    4292 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0806 00:38:01.099482    4292 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0806 00:38:01.099485    4292 kubeadm.go:310] 
	I0806 00:38:01.099571    4292 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0806 00:38:01.099579    4292 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0806 00:38:01.099583    4292 kubeadm.go:310] 
	I0806 00:38:01.099621    4292 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0806 00:38:01.099627    4292 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0806 00:38:01.099685    4292 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0806 00:38:01.099692    4292 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0806 00:38:01.099737    4292 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0806 00:38:01.099742    4292 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0806 00:38:01.099758    4292 kubeadm.go:310] 
	I0806 00:38:01.099805    4292 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0806 00:38:01.099811    4292 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0806 00:38:01.099816    4292 kubeadm.go:310] 
	I0806 00:38:01.099868    4292 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0806 00:38:01.099874    4292 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0806 00:38:01.099878    4292 kubeadm.go:310] 
	I0806 00:38:01.099924    4292 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0806 00:38:01.099932    4292 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0806 00:38:01.099998    4292 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0806 00:38:01.100012    4292 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0806 00:38:01.100083    4292 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0806 00:38:01.100088    4292 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0806 00:38:01.100092    4292 kubeadm.go:310] 
	I0806 00:38:01.100168    4292 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0806 00:38:01.100177    4292 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0806 00:38:01.100245    4292 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0806 00:38:01.100249    4292 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0806 00:38:01.100256    4292 kubeadm.go:310] 
	I0806 00:38:01.100330    4292 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token vbomjh.qsf72loo4zgv06fc \
	I0806 00:38:01.100335    4292 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vbomjh.qsf72loo4zgv06fc \
	I0806 00:38:01.100422    4292 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e \
	I0806 00:38:01.100428    4292 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e \
	I0806 00:38:01.100450    4292 command_runner.go:130] > 	--control-plane 
	I0806 00:38:01.100454    4292 kubeadm.go:310] 	--control-plane 
	I0806 00:38:01.100465    4292 kubeadm.go:310] 
	I0806 00:38:01.100533    4292 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0806 00:38:01.100538    4292 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0806 00:38:01.100545    4292 kubeadm.go:310] 
	I0806 00:38:01.100605    4292 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token vbomjh.qsf72loo4zgv06fc \
	I0806 00:38:01.100610    4292 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vbomjh.qsf72loo4zgv06fc \
	I0806 00:38:01.100694    4292 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e 
	I0806 00:38:01.100703    4292 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e 
	I0806 00:38:01.101330    4292 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 00:38:01.101334    4292 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 00:38:01.101354    4292 cni.go:84] Creating CNI manager for ""
	I0806 00:38:01.101361    4292 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0806 00:38:01.123627    4292 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0806 00:38:01.196528    4292 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0806 00:38:01.201237    4292 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0806 00:38:01.201250    4292 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0806 00:38:01.201255    4292 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0806 00:38:01.201260    4292 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0806 00:38:01.201265    4292 command_runner.go:130] > Access: 2024-08-06 07:35:44.089192446 +0000
	I0806 00:38:01.201275    4292 command_runner.go:130] > Modify: 2024-07-29 16:10:03.000000000 +0000
	I0806 00:38:01.201282    4292 command_runner.go:130] > Change: 2024-08-06 07:35:42.019366338 +0000
	I0806 00:38:01.201285    4292 command_runner.go:130] >  Birth: -
	I0806 00:38:01.201457    4292 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0806 00:38:01.201465    4292 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0806 00:38:01.217771    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0806 00:38:01.451925    4292 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0806 00:38:01.451939    4292 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0806 00:38:01.451946    4292 command_runner.go:130] > serviceaccount/kindnet created
	I0806 00:38:01.451949    4292 command_runner.go:130] > daemonset.apps/kindnet created
	I0806 00:38:01.451970    4292 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 00:38:01.452056    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:01.452057    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-100000 minikube.k8s.io/updated_at=2024_08_06T00_38_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8 minikube.k8s.io/name=multinode-100000 minikube.k8s.io/primary=true
	I0806 00:38:01.610233    4292 command_runner.go:130] > node/multinode-100000 labeled
	I0806 00:38:01.611382    4292 command_runner.go:130] > -16
	I0806 00:38:01.611408    4292 ops.go:34] apiserver oom_adj: -16
	I0806 00:38:01.611436    4292 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0806 00:38:01.611535    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:01.673352    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:02.112700    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:02.170574    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:02.612824    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:02.681015    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:03.112860    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:03.173114    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:03.612060    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:03.674241    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:04.112239    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:04.174075    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:04.613016    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:04.675523    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:05.112239    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:05.171613    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:05.611863    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:05.672963    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:06.112009    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:06.167728    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:06.613273    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:06.670554    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:07.113057    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:07.167700    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:07.613035    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:07.675035    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:08.113568    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:08.177386    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:08.611850    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:08.669063    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:09.113472    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:09.173560    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:09.613780    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:09.676070    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:10.112109    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:10.172674    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:10.613930    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:10.669788    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:11.112032    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:11.178288    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:11.612564    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:11.681621    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:12.112219    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:12.169314    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:12.612581    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:12.670247    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:13.113181    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:13.172574    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:13.613362    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:13.672811    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:14.112553    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:14.177904    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:14.612414    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:14.708737    4292 command_runner.go:130] > NAME      SECRETS   AGE
	I0806 00:38:14.708751    4292 command_runner.go:130] > default   0         0s
	I0806 00:38:14.710041    4292 kubeadm.go:1113] duration metric: took 13.257790627s to wait for elevateKubeSystemPrivileges
	I0806 00:38:14.710058    4292 kubeadm.go:394] duration metric: took 22.29094538s to StartCluster
	I0806 00:38:14.710072    4292 settings.go:142] acquiring lock: {Name:mk7aec99dc6d69d6a2c18b35ff8bde3cddf78620 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:38:14.710182    4292 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:38:14.710733    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/kubeconfig: {Name:mka547673b59bc4eb06e1f2c8130de31708dba29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:38:14.710987    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0806 00:38:14.710992    4292 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 00:38:14.711032    4292 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 00:38:14.711084    4292 addons.go:69] Setting storage-provisioner=true in profile "multinode-100000"
	I0806 00:38:14.711092    4292 addons.go:69] Setting default-storageclass=true in profile "multinode-100000"
	I0806 00:38:14.711119    4292 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-100000"
	I0806 00:38:14.711121    4292 addons.go:234] Setting addon storage-provisioner=true in "multinode-100000"
	I0806 00:38:14.711168    4292 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:38:14.711168    4292 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:38:14.711516    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.711537    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.711593    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.711618    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.720676    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52433
	I0806 00:38:14.721047    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52435
	I0806 00:38:14.721245    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.721337    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.721602    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.721612    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.721697    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.721714    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.721841    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.721914    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.721953    4292 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:38:14.722073    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:14.722146    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:38:14.722387    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.722420    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.724119    4292 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:38:14.724644    4292 kapi.go:59] client config for multinode-100000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key", CAFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x126711a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 00:38:14.725326    4292 cert_rotation.go:137] Starting client certificate rotation controller
	I0806 00:38:14.725514    4292 addons.go:234] Setting addon default-storageclass=true in "multinode-100000"
	I0806 00:38:14.725534    4292 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:38:14.725758    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.725781    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.731505    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52437
	I0806 00:38:14.731883    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.732214    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.732225    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.732427    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.732542    4292 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:38:14.732646    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:14.732716    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:38:14.733688    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:38:14.734469    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52439
	I0806 00:38:14.749366    4292 out.go:177] * Verifying Kubernetes components...
	I0806 00:38:14.750086    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.771676    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.771692    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.771908    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.772346    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.772371    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.781133    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52441
	I0806 00:38:14.781487    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.781841    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.781857    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.782071    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.782186    4292 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:38:14.782264    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:14.782343    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:38:14.783274    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:38:14.783391    4292 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 00:38:14.783400    4292 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 00:38:14.783408    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:38:14.783487    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:38:14.783566    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:38:14.783647    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:38:14.783724    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:38:14.807507    4292 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:38:14.814402    4292 command_runner.go:130] > apiVersion: v1
	I0806 00:38:14.814414    4292 command_runner.go:130] > data:
	I0806 00:38:14.814417    4292 command_runner.go:130] >   Corefile: |
	I0806 00:38:14.814421    4292 command_runner.go:130] >     .:53 {
	I0806 00:38:14.814427    4292 command_runner.go:130] >         errors
	I0806 00:38:14.814434    4292 command_runner.go:130] >         health {
	I0806 00:38:14.814462    4292 command_runner.go:130] >            lameduck 5s
	I0806 00:38:14.814467    4292 command_runner.go:130] >         }
	I0806 00:38:14.814470    4292 command_runner.go:130] >         ready
	I0806 00:38:14.814475    4292 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0806 00:38:14.814479    4292 command_runner.go:130] >            pods insecure
	I0806 00:38:14.814483    4292 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0806 00:38:14.814491    4292 command_runner.go:130] >            ttl 30
	I0806 00:38:14.814494    4292 command_runner.go:130] >         }
	I0806 00:38:14.814498    4292 command_runner.go:130] >         prometheus :9153
	I0806 00:38:14.814502    4292 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0806 00:38:14.814511    4292 command_runner.go:130] >            max_concurrent 1000
	I0806 00:38:14.814515    4292 command_runner.go:130] >         }
	I0806 00:38:14.814519    4292 command_runner.go:130] >         cache 30
	I0806 00:38:14.814522    4292 command_runner.go:130] >         loop
	I0806 00:38:14.814527    4292 command_runner.go:130] >         reload
	I0806 00:38:14.814530    4292 command_runner.go:130] >         loadbalance
	I0806 00:38:14.814541    4292 command_runner.go:130] >     }
	I0806 00:38:14.814545    4292 command_runner.go:130] > kind: ConfigMap
	I0806 00:38:14.814548    4292 command_runner.go:130] > metadata:
	I0806 00:38:14.814555    4292 command_runner.go:130] >   creationTimestamp: "2024-08-06T07:38:00Z"
	I0806 00:38:14.814559    4292 command_runner.go:130] >   name: coredns
	I0806 00:38:14.814563    4292 command_runner.go:130] >   namespace: kube-system
	I0806 00:38:14.814566    4292 command_runner.go:130] >   resourceVersion: "257"
	I0806 00:38:14.814570    4292 command_runner.go:130] >   uid: d8fd854e-ee58-4cd2-8723-2418b89b5dc3
	I0806 00:38:14.814679    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0806 00:38:14.866135    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:38:14.866436    4292 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 00:38:14.866454    4292 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 00:38:14.866500    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:38:14.866990    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:38:14.867164    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:38:14.867290    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:38:14.867406    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:38:14.872742    4292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 00:38:15.241341    4292 command_runner.go:130] > configmap/coredns replaced
	I0806 00:38:15.242685    4292 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I0806 00:38:15.242758    4292 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:38:15.242961    4292 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:38:15.243148    4292 kapi.go:59] client config for multinode-100000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key", CAFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x126711a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 00:38:15.243392    4292 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0806 00:38:15.243400    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.243407    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.243411    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.256678    4292 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0806 00:38:15.256695    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.256702    4292 round_trippers.go:580]     Audit-Id: c7c6b1c0-d638-405d-9826-1613f9442124
	I0806 00:38:15.256715    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.256719    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.256721    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.256724    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.256731    4292 round_trippers.go:580]     Content-Length: 291
	I0806 00:38:15.256734    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.256762    4292 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a7f2b260-b404-47f8-94a7-9444b4d2e65d","resourceVersion":"385","creationTimestamp":"2024-08-06T07:38:00Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0806 00:38:15.257109    4292 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a7f2b260-b404-47f8-94a7-9444b4d2e65d","resourceVersion":"385","creationTimestamp":"2024-08-06T07:38:00Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0806 00:38:15.257149    4292 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0806 00:38:15.257157    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.257163    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.257166    4292 round_trippers.go:473]     Content-Type: application/json
	I0806 00:38:15.257169    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.263818    4292 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0806 00:38:15.263831    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.263837    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.263840    4292 round_trippers.go:580]     Content-Length: 291
	I0806 00:38:15.263843    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.263846    4292 round_trippers.go:580]     Audit-Id: fc5baf31-13f0-4c94-a234-c9583698bc4a
	I0806 00:38:15.263849    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.263853    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.263856    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.263869    4292 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a7f2b260-b404-47f8-94a7-9444b4d2e65d","resourceVersion":"387","creationTimestamp":"2024-08-06T07:38:00Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0806 00:38:15.288440    4292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 00:38:15.316986    4292 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0806 00:38:15.318339    4292 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:38:15.318523    4292 kapi.go:59] client config for multinode-100000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key", CAFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x126711a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 00:38:15.318703    4292 node_ready.go:35] waiting up to 6m0s for node "multinode-100000" to be "Ready" ...
	I0806 00:38:15.318752    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:15.318757    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.318762    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.318766    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.318890    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.318897    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.319084    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.319089    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.319096    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.319104    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.319113    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.319239    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.319249    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.319298    4292 round_trippers.go:463] GET https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses
	I0806 00:38:15.319296    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.319304    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.319313    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.319316    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.328466    4292 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0806 00:38:15.328478    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.328484    4292 round_trippers.go:580]     Content-Length: 1273
	I0806 00:38:15.328487    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.328490    4292 round_trippers.go:580]     Audit-Id: 55117bdb-b1b1-4b1d-a091-1eb3d07a9569
	I0806 00:38:15.328493    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.328496    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.328498    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.328501    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.328521    4292 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"396"},"items":[{"metadata":{"name":"standard","uid":"db2316a9-24ea-47df-bf39-03322fc9a8eb","resourceVersion":"396","creationTimestamp":"2024-08-06T07:38:15Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-06T07:38:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0806 00:38:15.328567    4292 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0806 00:38:15.328581    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.328586    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.328590    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.328593    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.328596    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.328599    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.328602    4292 round_trippers.go:580]     Audit-Id: 7ce70ed0-47c9-432d-8e5b-ac52e38e59a7
	I0806 00:38:15.328766    4292 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"db2316a9-24ea-47df-bf39-03322fc9a8eb","resourceVersion":"396","creationTimestamp":"2024-08-06T07:38:15Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-06T07:38:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0806 00:38:15.328802    4292 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0806 00:38:15.328808    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.328813    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.328818    4292 round_trippers.go:473]     Content-Type: application/json
	I0806 00:38:15.328820    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.330337    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:15.340216    4292 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0806 00:38:15.340231    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.340236    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.340243    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.340247    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.340251    4292 round_trippers.go:580]     Content-Length: 1220
	I0806 00:38:15.340254    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.340257    4292 round_trippers.go:580]     Audit-Id: 6dc8b90a-612f-4331-8c4e-911fcb5e8b97
	I0806 00:38:15.340261    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.340479    4292 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"db2316a9-24ea-47df-bf39-03322fc9a8eb","resourceVersion":"396","creationTimestamp":"2024-08-06T07:38:15Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-06T07:38:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0806 00:38:15.340564    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.340574    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.340728    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.340739    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.340746    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.606405    4292 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0806 00:38:15.610350    4292 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0806 00:38:15.615396    4292 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0806 00:38:15.619891    4292 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0806 00:38:15.627349    4292 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0806 00:38:15.635206    4292 command_runner.go:130] > pod/storage-provisioner created
	I0806 00:38:15.636675    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.636686    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.636830    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.636833    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.636843    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.636852    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.636857    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.636972    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.636980    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.636995    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.660876    4292 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0806 00:38:15.681735    4292 addons.go:510] duration metric: took 970.696783ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0806 00:38:15.744023    4292 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0806 00:38:15.744043    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.744049    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.744053    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.745471    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:15.745481    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.745486    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.745489    4292 round_trippers.go:580]     Audit-Id: 2e02dd3c-4368-4363-aef8-c54cb00d4e41
	I0806 00:38:15.745492    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.745495    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.745497    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.745500    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.745503    4292 round_trippers.go:580]     Content-Length: 291
	I0806 00:38:15.745519    4292 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a7f2b260-b404-47f8-94a7-9444b4d2e65d","resourceVersion":"399","creationTimestamp":"2024-08-06T07:38:00Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0806 00:38:15.745572    4292 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-100000" context rescaled to 1 replicas
	I0806 00:38:15.820125    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:15.820137    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.820143    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.820145    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.821478    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:15.821488    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.821495    4292 round_trippers.go:580]     Audit-Id: 2538e82b-a5b8-4cce-b67d-49b0a0cc6ccb
	I0806 00:38:15.821499    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.821504    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.821509    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.821513    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.821517    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.821699    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:16.318995    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:16.319022    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:16.319044    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:16.319050    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:16.321451    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:16.321466    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:16.321473    4292 round_trippers.go:580]     Audit-Id: 6d358883-b606-4bf9-b02f-6cb3dcc86ebb
	I0806 00:38:16.321478    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:16.321482    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:16.321507    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:16.321515    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:16.321519    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:16 GMT
	I0806 00:38:16.321636    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:16.819864    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:16.819880    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:16.819887    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:16.819892    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:16.822003    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:16.822013    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:16.822019    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:16.822032    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:16 GMT
	I0806 00:38:16.822039    4292 round_trippers.go:580]     Audit-Id: 688c294c-2ec1-4257-9ae2-31048566e1a5
	I0806 00:38:16.822042    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:16.822045    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:16.822048    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:16.822127    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:17.319875    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:17.319887    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:17.319893    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:17.319898    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:17.324202    4292 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 00:38:17.324219    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:17.324228    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:17.324233    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:17.324237    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:17.324247    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:17.324251    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:17 GMT
	I0806 00:38:17.324253    4292 round_trippers.go:580]     Audit-Id: 3cbcad32-1d66-4480-8eea-e0ba3baeb718
	I0806 00:38:17.324408    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:17.324668    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:17.818929    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:17.818941    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:17.818948    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:17.818952    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:17.820372    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:17.820383    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:17.820390    4292 round_trippers.go:580]     Audit-Id: 1b64d2ad-91d1-49c6-8964-cd044f7ab24f
	I0806 00:38:17.820395    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:17.820400    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:17.820404    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:17.820407    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:17.820409    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:17 GMT
	I0806 00:38:17.820562    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:18.318915    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:18.318928    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:18.318934    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:18.318937    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:18.320383    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:18.320392    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:18.320396    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:18.320400    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:18 GMT
	I0806 00:38:18.320403    4292 round_trippers.go:580]     Audit-Id: b404a6ee-15b9-4e15-b8f8-4cd9324a513d
	I0806 00:38:18.320405    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:18.320408    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:18.320411    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:18.320536    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:18.819634    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:18.819647    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:18.819654    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:18.819657    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:18.821628    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:18.821635    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:18.821639    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:18 GMT
	I0806 00:38:18.821643    4292 round_trippers.go:580]     Audit-Id: 12545d9e-2520-4675-8957-dd291bc1d252
	I0806 00:38:18.821646    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:18.821649    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:18.821651    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:18.821654    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:18.821749    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:19.319242    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:19.319258    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:19.319264    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:19.319267    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:19.320611    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:19.320621    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:19.320627    4292 round_trippers.go:580]     Audit-Id: a9b124b2-ff49-4d7d-961a-c4a1b6b3e4ab
	I0806 00:38:19.320630    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:19.320632    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:19.320635    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:19.320639    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:19.320642    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:19 GMT
	I0806 00:38:19.320781    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:19.820342    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:19.820371    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:19.820428    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:19.820437    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:19.823221    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:19.823242    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:19.823252    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:19.823258    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:19.823266    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:19.823272    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:19 GMT
	I0806 00:38:19.823291    4292 round_trippers.go:580]     Audit-Id: 9330a785-b406-42d7-a74c-e80b34311e1a
	I0806 00:38:19.823302    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:19.823409    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:19.823671    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:20.319027    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:20.319043    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:20.319051    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:20.319056    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:20.320812    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:20.320821    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:20.320827    4292 round_trippers.go:580]     Audit-Id: 1d9840bb-ba8b-45f8-852f-8ef7f645c8bd
	I0806 00:38:20.320830    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:20.320832    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:20.320835    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:20.320838    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:20.320841    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:20 GMT
	I0806 00:38:20.321034    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:20.819543    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:20.819566    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:20.819578    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:20.819585    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:20.822277    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:20.822293    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:20.822300    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:20.822303    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:20.822307    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:20 GMT
	I0806 00:38:20.822310    4292 round_trippers.go:580]     Audit-Id: 6a96712c-fdd2-4036-95c0-27109366b2b5
	I0806 00:38:20.822313    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:20.822332    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:20.822436    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:21.319938    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:21.320061    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:21.320076    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:21.320084    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:21.322332    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:21.322343    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:21.322350    4292 round_trippers.go:580]     Audit-Id: b6796df6-8c9c-475a-b9c2-e73edb1c0720
	I0806 00:38:21.322355    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:21.322359    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:21.322362    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:21.322366    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:21.322370    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:21 GMT
	I0806 00:38:21.322503    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:21.819349    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:21.819372    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:21.819383    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:21.819388    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:21.821890    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:21.821905    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:21.821912    4292 round_trippers.go:580]     Audit-Id: 89b2a861-f5a0-43e4-9d3f-01f7230eecc8
	I0806 00:38:21.821916    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:21.821920    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:21.821923    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:21.821927    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:21.821931    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:21 GMT
	I0806 00:38:21.822004    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:22.320544    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:22.320565    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:22.320576    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:22.320581    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:22.322858    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:22.322872    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:22.322879    4292 round_trippers.go:580]     Audit-Id: 70ae59be-bf9a-4c7a-9fb8-93ea504768fb
	I0806 00:38:22.322885    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:22.322888    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:22.322891    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:22.322895    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:22.322897    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:22 GMT
	I0806 00:38:22.323158    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:22.323412    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:22.819095    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:22.819114    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:22.819126    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:22.819132    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:22.821284    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:22.821297    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:22.821307    4292 round_trippers.go:580]     Audit-Id: 1c5d80ab-21c3-4733-bd98-f4c681e0fe0e
	I0806 00:38:22.821313    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:22.821318    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:22.821321    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:22.821324    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:22.821334    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:22 GMT
	I0806 00:38:22.821552    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:23.319478    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:23.319500    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:23.319518    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:23.319524    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:23.322104    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:23.322124    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:23.322132    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:23.322137    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:23.322143    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:23.322146    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:23.322156    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:23 GMT
	I0806 00:38:23.322161    4292 round_trippers.go:580]     Audit-Id: 5276d3f7-64a0-4983-b60c-4943cbdfd74f
	I0806 00:38:23.322305    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:23.819102    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:23.819121    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:23.819130    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:23.819135    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:23.821174    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:23.821208    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:23.821216    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:23.821222    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:23.821227    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:23.821230    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:23.821241    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:23 GMT
	I0806 00:38:23.821254    4292 round_trippers.go:580]     Audit-Id: 9a86a309-2e1e-4b43-9975-baf4a0c93f44
	I0806 00:38:23.821483    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:24.320265    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:24.320287    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:24.320299    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:24.320305    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:24.323064    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:24.323097    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:24.323123    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:24.323140    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:24.323149    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:24.323178    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:24.323185    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:24 GMT
	I0806 00:38:24.323196    4292 round_trippers.go:580]     Audit-Id: b0ef4ff1-b4d6-4fd5-870c-46b9f544b517
	I0806 00:38:24.323426    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:24.323675    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:24.819060    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:24.819080    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:24.819097    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:24.819136    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:24.821377    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:24.821390    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:24.821397    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:24 GMT
	I0806 00:38:24.821402    4292 round_trippers.go:580]     Audit-Id: b050183e-0245-4d40-9972-e2dd2be24181
	I0806 00:38:24.821405    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:24.821409    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:24.821413    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:24.821418    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:24.821619    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:25.319086    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:25.319102    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:25.319110    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:25.319114    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:25.321127    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:25.321149    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:25.321154    4292 round_trippers.go:580]     Audit-Id: b27c2996-2cfb-4ec2-83b6-49df62cf6805
	I0806 00:38:25.321177    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:25.321180    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:25.321184    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:25.321186    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:25.321194    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:25 GMT
	I0806 00:38:25.321259    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:25.820656    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:25.820678    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:25.820689    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:25.820695    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:25.823182    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:25.823194    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:25.823205    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:25.823210    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:25.823213    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:25.823216    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:25.823219    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:25 GMT
	I0806 00:38:25.823222    4292 round_trippers.go:580]     Audit-Id: e11f3fd5-b1c3-44c0-931c-e7172ae35765
	I0806 00:38:25.823311    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:26.320693    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:26.320710    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:26.320717    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:26.320721    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:26.322330    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:26.322339    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:26.322344    4292 round_trippers.go:580]     Audit-Id: 0c372b78-f3b7-43f2-a7aa-6ec405f17ce3
	I0806 00:38:26.322347    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:26.322350    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:26.322353    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:26.322363    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:26.322366    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:26 GMT
	I0806 00:38:26.322578    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:26.820921    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:26.820948    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:26.820966    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:26.820972    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:26.823698    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:26.823713    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:26.823723    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:26.823730    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:26.823739    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:26 GMT
	I0806 00:38:26.823757    4292 round_trippers.go:580]     Audit-Id: e8e852a8-07b7-455b-8f5b-ff9801610b22
	I0806 00:38:26.823766    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:26.823770    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:26.824211    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:26.824465    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:27.321232    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:27.321253    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:27.321265    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:27.321270    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:27.324530    4292 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:38:27.324543    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:27.324550    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:27.324554    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:27 GMT
	I0806 00:38:27.324566    4292 round_trippers.go:580]     Audit-Id: 4a0b2d15-d15f-46de-8b4a-13a9d4121efd
	I0806 00:38:27.324572    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:27.324578    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:27.324583    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:27.324732    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:27.820148    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:27.820170    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:27.820181    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:27.820186    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:27.822835    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:27.822859    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:27.823023    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:27.823030    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:27.823033    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:27.823038    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:27.823046    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:27 GMT
	I0806 00:38:27.823049    4292 round_trippers.go:580]     Audit-Id: 77dd4240-18e0-49c7-8881-ae5df446f885
	I0806 00:38:27.823127    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:28.319391    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:28.319412    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:28.319423    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:28.319431    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:28.321889    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:28.321906    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:28.321916    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:28.321923    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:28.321927    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:28.321930    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:28 GMT
	I0806 00:38:28.321933    4292 round_trippers.go:580]     Audit-Id: d4ff4fc8-d53b-4307-82a0-9a61164b0b18
	I0806 00:38:28.321937    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:28.322088    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:28.819334    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:28.819362    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:28.819374    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:28.819385    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:28.821814    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:28.821826    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:28.821833    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:28.821838    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:28.821843    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:28.821847    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:28.821851    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:28 GMT
	I0806 00:38:28.821855    4292 round_trippers.go:580]     Audit-Id: 9a79b284-c2c3-4adb-9d74-73805465144b
	I0806 00:38:28.821988    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:29.320103    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:29.320120    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:29.320128    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:29.320134    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:29.321966    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:29.321980    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:29.321987    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:29.322000    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:29.322005    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:29.322008    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:29 GMT
	I0806 00:38:29.322020    4292 round_trippers.go:580]     Audit-Id: 749bcf9b-24c9-4fac-99d8-ad9e961b1897
	I0806 00:38:29.322024    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:29.322094    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:29.322341    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:29.819722    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:29.819743    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:29.819752    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:29.819760    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:29.822636    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:29.822668    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:29.822700    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:29.822711    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:29.822721    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:29.822735    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:29 GMT
	I0806 00:38:29.822748    4292 round_trippers.go:580]     Audit-Id: 5408f9b5-fba3-4495-a0b7-9791cf82019c
	I0806 00:38:29.822773    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:29.822903    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:30.320349    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:30.320370    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.320380    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.320385    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.322518    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:30.322531    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.322538    4292 round_trippers.go:580]     Audit-Id: 1df1df85-a25c-4470-876a-7b00620c8f9b
	I0806 00:38:30.322543    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.322546    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.322550    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.322553    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.322558    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.322794    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:30.820065    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:30.820087    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.820099    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.820111    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.822652    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:30.822673    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.822683    4292 round_trippers.go:580]     Audit-Id: 0926ae78-d98d-44a5-8489-5522ccd95503
	I0806 00:38:30.822689    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.822695    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.822700    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.822706    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.822713    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.823032    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:30.823315    4292 node_ready.go:49] node "multinode-100000" has status "Ready":"True"
	I0806 00:38:30.823329    4292 node_ready.go:38] duration metric: took 15.504306549s for node "multinode-100000" to be "Ready" ...
	I0806 00:38:30.823341    4292 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 00:38:30.823387    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:30.823395    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.823403    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.823407    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.825747    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:30.825756    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.825761    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.825764    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.825768    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.825770    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.825773    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.825775    4292 round_trippers.go:580]     Audit-Id: f1883856-a563-4d68-a4ed-7bface4b980a
	I0806 00:38:30.827206    4292 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"431","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56289 chars]
	I0806 00:38:30.829456    4292 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:30.829498    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:38:30.829503    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.829508    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.829512    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.830675    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:30.830684    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.830691    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.830696    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.830704    4292 round_trippers.go:580]     Audit-Id: f42eab96-6adf-4fcb-9345-e180ca00b73d
	I0806 00:38:30.830715    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.830718    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.830720    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.830856    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"431","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0806 00:38:30.831092    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:30.831099    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.831105    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.831107    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.832184    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:30.832191    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.832197    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.832203    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.832207    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.832212    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.832218    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.832226    4292 round_trippers.go:580]     Audit-Id: d34ccfc2-089c-4010-b991-cc425a2b2446
	I0806 00:38:30.832371    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.329830    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:38:31.329844    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.329850    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.329854    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.331738    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:31.331767    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.331789    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.331808    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.331813    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.331817    4292 round_trippers.go:580]     Audit-Id: 32294b1b-fd5c-43f7-9851-1c5e5d04c3d9
	I0806 00:38:31.331820    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.331823    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.331921    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"431","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0806 00:38:31.332207    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.332215    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.332221    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.332225    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.333311    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:31.333324    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.333331    4292 round_trippers.go:580]     Audit-Id: a8b9458e-7f48-4e61-9daf-b2c4a52b1285
	I0806 00:38:31.333336    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.333342    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.333347    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.333351    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.333369    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.333493    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.830019    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:38:31.830040    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.830057    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.830063    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.832040    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:31.832055    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.832062    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.832068    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.832072    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.832076    4292 round_trippers.go:580]     Audit-Id: eae85e40-d774-4e35-8513-1a20542ce5f5
	I0806 00:38:31.832079    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.832082    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.832316    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"446","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6576 chars]
	I0806 00:38:31.832691    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.832701    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.832710    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.832715    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.833679    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.833688    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.833694    4292 round_trippers.go:580]     Audit-Id: ecd49a1b-eb24-4191-89bb-5cb071fd543a
	I0806 00:38:31.833699    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.833702    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.833711    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.833714    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.833717    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.833906    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.834082    4292 pod_ready.go:92] pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:31.834093    4292 pod_ready.go:81] duration metric: took 1.004604302s for pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.834101    4292 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.834131    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-100000
	I0806 00:38:31.834136    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.834141    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.834145    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.835126    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.835134    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.835139    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.835144    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.835147    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.835152    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.835155    4292 round_trippers.go:580]     Audit-Id: 8f3355e7-ed89-4a5c-9ef4-3f319a0b7eef
	I0806 00:38:31.835157    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.835289    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-100000","namespace":"kube-system","uid":"227ab7d9-399e-4151-bee7-1520182e38fe","resourceVersion":"333","creationTimestamp":"2024-08-06T07:37:59Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"4d956ffcd8bdef6a75a3174d9c9d792c","kubernetes.io/config.mirror":"4d956ffcd8bdef6a75a3174d9c9d792c","kubernetes.io/config.seen":"2024-08-06T07:37:55.730523562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:37:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0806 00:38:31.835498    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.835505    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.835510    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.835514    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.836524    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:31.836533    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.836539    4292 round_trippers.go:580]     Audit-Id: a9fdb4f7-31e3-48e4-b5f3-023b2c5e4bab
	I0806 00:38:31.836547    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.836553    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.836556    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.836562    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.836568    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.836674    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.836837    4292 pod_ready.go:92] pod "etcd-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:31.836847    4292 pod_ready.go:81] duration metric: took 2.741532ms for pod "etcd-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.836854    4292 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.836883    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-100000
	I0806 00:38:31.836888    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.836894    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.836898    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.837821    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.837830    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.837836    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.837840    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.837844    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.837846    4292 round_trippers.go:580]     Audit-Id: 32a7a6c7-72cf-4b7f-8f80-7ebb5aaaf666
	I0806 00:38:31.837850    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.837853    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.838003    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-100000","namespace":"kube-system","uid":"ce1dee9b-5f30-49a9-9066-7faf5f65c4d3","resourceVersion":"331","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"7812fbdfd4f741d8b504bcb30d9268c5","kubernetes.io/config.mirror":"7812fbdfd4f741d8b504bcb30d9268c5","kubernetes.io/config.seen":"2024-08-06T07:38:00.425843150Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7684 chars]
	I0806 00:38:31.838230    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.838237    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.838243    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.838247    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.839014    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.839023    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.839030    4292 round_trippers.go:580]     Audit-Id: 7f28e0f4-8551-4462-aec2-766b8d2482cb
	I0806 00:38:31.839036    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.839040    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.839042    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.839045    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.839048    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.839181    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.839335    4292 pod_ready.go:92] pod "kube-apiserver-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:31.839345    4292 pod_ready.go:81] duration metric: took 2.482949ms for pod "kube-apiserver-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.839352    4292 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.839378    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-100000
	I0806 00:38:31.839383    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.839388    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.839392    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.840298    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.840305    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.840310    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.840313    4292 round_trippers.go:580]     Audit-Id: cf384588-551f-4b8a-b13b-1adda6dff10a
	I0806 00:38:31.840317    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.840320    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.840324    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.840328    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.840495    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-100000","namespace":"kube-system","uid":"cefe88fb-c337-47c3-b4f2-acdadde539f2","resourceVersion":"329","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0ae29164078dfb7d8ac7d5a935c4d875","kubernetes.io/config.mirror":"0ae29164078dfb7d8ac7d5a935c4d875","kubernetes.io/config.seen":"2024-08-06T07:38:00.425770816Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7259 chars]
	I0806 00:38:31.840707    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.840714    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.840719    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.840722    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.841465    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.841471    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.841476    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.841481    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.841487    4292 round_trippers.go:580]     Audit-Id: 9a301694-659b-414d-8736-740501267c17
	I0806 00:38:31.841491    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.841496    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.841500    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.841678    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.841830    4292 pod_ready.go:92] pod "kube-controller-manager-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:31.841836    4292 pod_ready.go:81] duration metric: took 2.479787ms for pod "kube-controller-manager-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.841842    4292 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-crsrr" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.841875    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crsrr
	I0806 00:38:31.841880    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.841885    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.841890    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.842875    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.842883    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.842888    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.842891    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.842895    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.842898    4292 round_trippers.go:580]     Audit-Id: 9e07db72-d867-47d3-adbc-514b547e8978
	I0806 00:38:31.842901    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.842904    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.843113    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-crsrr","generateName":"kube-proxy-","namespace":"kube-system","uid":"f72beca3-9601-4aad-b3ba-33f8de5db052","resourceVersion":"403","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aeb7868a-2175-4480-b58d-3eb9a593c884","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aeb7868a-2175-4480-b58d-3eb9a593c884\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
	I0806 00:38:32.021239    4292 request.go:629] Waited for 177.889914ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:32.021360    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:32.021372    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.021384    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.021390    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.024288    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:32.024309    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.024318    4292 round_trippers.go:580]     Audit-Id: d85fbd21-5256-48bd-b92b-10eb012d9c7a
	I0806 00:38:32.024322    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.024327    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.024331    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.024336    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.024339    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.024617    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:32.024865    4292 pod_ready.go:92] pod "kube-proxy-crsrr" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:32.024877    4292 pod_ready.go:81] duration metric: took 183.025974ms for pod "kube-proxy-crsrr" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:32.024887    4292 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:32.222202    4292 request.go:629] Waited for 197.196804ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-100000
	I0806 00:38:32.222252    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-100000
	I0806 00:38:32.222260    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.222284    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.222291    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.225758    4292 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:38:32.225776    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.225783    4292 round_trippers.go:580]     Audit-Id: 9c5c96d8-55ee-43bd-b8fe-af3b79432f55
	I0806 00:38:32.225788    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.225791    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.225797    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.225800    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.225803    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.225862    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-100000","namespace":"kube-system","uid":"773d7bde-86f3-4e9d-b4aa-67ca3b345180","resourceVersion":"332","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4d38f57d568be838072abd789adb44b9","kubernetes.io/config.mirror":"4d38f57d568be838072abd789adb44b9","kubernetes.io/config.seen":"2024-08-06T07:38:00.425836810Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0806 00:38:32.420759    4292 request.go:629] Waited for 194.652014ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:32.420927    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:32.420938    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.420949    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.420955    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.423442    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:32.423460    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.423471    4292 round_trippers.go:580]     Audit-Id: 04a6ba1a-a35c-4d8b-a087-80f9206646b4
	I0806 00:38:32.423478    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.423483    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.423488    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.423493    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.423499    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.423791    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:32.424052    4292 pod_ready.go:92] pod "kube-scheduler-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:32.424064    4292 pod_ready.go:81] duration metric: took 399.162309ms for pod "kube-scheduler-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:32.424073    4292 pod_ready.go:38] duration metric: took 1.600692444s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 00:38:32.424096    4292 api_server.go:52] waiting for apiserver process to appear ...
	I0806 00:38:32.424160    4292 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:38:32.436813    4292 command_runner.go:130] > 1953
	I0806 00:38:32.436840    4292 api_server.go:72] duration metric: took 17.725484476s to wait for apiserver process to appear ...
	I0806 00:38:32.436849    4292 api_server.go:88] waiting for apiserver healthz status ...
	I0806 00:38:32.436863    4292 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0806 00:38:32.440364    4292 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0806 00:38:32.440399    4292 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I0806 00:38:32.440404    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.440410    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.440421    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.440928    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:32.440937    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.440942    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.440946    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.440950    4292 round_trippers.go:580]     Content-Length: 263
	I0806 00:38:32.440953    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.440959    4292 round_trippers.go:580]     Audit-Id: c1a3bf62-d4bb-49fe-bb9c-6619b1793ab6
	I0806 00:38:32.440962    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.440965    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.440976    4292 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0806 00:38:32.441018    4292 api_server.go:141] control plane version: v1.30.3
	I0806 00:38:32.441028    4292 api_server.go:131] duration metric: took 4.174407ms to wait for apiserver health ...
	I0806 00:38:32.441033    4292 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 00:38:32.620918    4292 request.go:629] Waited for 179.84972ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:32.620960    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:32.620982    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.620988    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.620992    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.623183    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:32.623194    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.623199    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.623202    4292 round_trippers.go:580]     Audit-Id: 7febd61d-780d-47b6-884a-fdaf22170934
	I0806 00:38:32.623206    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.623211    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.623217    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.623221    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.623596    4292 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"446","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0806 00:38:32.624861    4292 system_pods.go:59] 8 kube-system pods found
	I0806 00:38:32.624876    4292 system_pods.go:61] "coredns-7db6d8ff4d-snf8h" [80bd44de-6f91-4e47-8832-a66b3c64808d] Running
	I0806 00:38:32.624880    4292 system_pods.go:61] "etcd-multinode-100000" [227ab7d9-399e-4151-bee7-1520182e38fe] Running
	I0806 00:38:32.624883    4292 system_pods.go:61] "kindnet-g2xk7" [84207ead-3403-4759-9bf2-ae0aa742699e] Running
	I0806 00:38:32.624886    4292 system_pods.go:61] "kube-apiserver-multinode-100000" [ce1dee9b-5f30-49a9-9066-7faf5f65c4d3] Running
	I0806 00:38:32.624890    4292 system_pods.go:61] "kube-controller-manager-multinode-100000" [cefe88fb-c337-47c3-b4f2-acdadde539f2] Running
	I0806 00:38:32.624895    4292 system_pods.go:61] "kube-proxy-crsrr" [f72beca3-9601-4aad-b3ba-33f8de5db052] Running
	I0806 00:38:32.624897    4292 system_pods.go:61] "kube-scheduler-multinode-100000" [773d7bde-86f3-4e9d-b4aa-67ca3b345180] Running
	I0806 00:38:32.624900    4292 system_pods.go:61] "storage-provisioner" [38b20fa5-6002-4e12-860f-1aa0047581b1] Running
	I0806 00:38:32.624904    4292 system_pods.go:74] duration metric: took 183.863815ms to wait for pod list to return data ...
	I0806 00:38:32.624911    4292 default_sa.go:34] waiting for default service account to be created ...
	I0806 00:38:32.821065    4292 request.go:629] Waited for 196.088199ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0806 00:38:32.821123    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0806 00:38:32.821132    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.821146    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.821153    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.824169    4292 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:38:32.824185    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.824192    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.824198    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.824203    4292 round_trippers.go:580]     Content-Length: 261
	I0806 00:38:32.824207    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.824210    4292 round_trippers.go:580]     Audit-Id: da9e49d4-6671-4b25-a056-32b71af0fb45
	I0806 00:38:32.824214    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.824217    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.824230    4292 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b920a0f4-26ad-4389-bfd3-1a9764da9619","resourceVersion":"336","creationTimestamp":"2024-08-06T07:38:14Z"}}]}
	I0806 00:38:32.824397    4292 default_sa.go:45] found service account: "default"
	I0806 00:38:32.824409    4292 default_sa.go:55] duration metric: took 199.488573ms for default service account to be created ...
	I0806 00:38:32.824419    4292 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 00:38:33.021550    4292 request.go:629] Waited for 197.072106ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:33.021720    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:33.021731    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:33.021741    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:33.021779    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:33.025126    4292 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:38:33.025143    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:33.025150    4292 round_trippers.go:580]     Audit-Id: e38b20d4-b38f-40c8-9e18-7f94f8f63289
	I0806 00:38:33.025155    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:33.025161    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:33.025166    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:33.025173    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:33.025177    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:33 GMT
	I0806 00:38:33.025737    4292 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"446","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0806 00:38:33.027034    4292 system_pods.go:86] 8 kube-system pods found
	I0806 00:38:33.027043    4292 system_pods.go:89] "coredns-7db6d8ff4d-snf8h" [80bd44de-6f91-4e47-8832-a66b3c64808d] Running
	I0806 00:38:33.027047    4292 system_pods.go:89] "etcd-multinode-100000" [227ab7d9-399e-4151-bee7-1520182e38fe] Running
	I0806 00:38:33.027050    4292 system_pods.go:89] "kindnet-g2xk7" [84207ead-3403-4759-9bf2-ae0aa742699e] Running
	I0806 00:38:33.027054    4292 system_pods.go:89] "kube-apiserver-multinode-100000" [ce1dee9b-5f30-49a9-9066-7faf5f65c4d3] Running
	I0806 00:38:33.027057    4292 system_pods.go:89] "kube-controller-manager-multinode-100000" [cefe88fb-c337-47c3-b4f2-acdadde539f2] Running
	I0806 00:38:33.027060    4292 system_pods.go:89] "kube-proxy-crsrr" [f72beca3-9601-4aad-b3ba-33f8de5db052] Running
	I0806 00:38:33.027066    4292 system_pods.go:89] "kube-scheduler-multinode-100000" [773d7bde-86f3-4e9d-b4aa-67ca3b345180] Running
	I0806 00:38:33.027069    4292 system_pods.go:89] "storage-provisioner" [38b20fa5-6002-4e12-860f-1aa0047581b1] Running
	I0806 00:38:33.027074    4292 system_pods.go:126] duration metric: took 202.645822ms to wait for k8s-apps to be running ...
	I0806 00:38:33.027081    4292 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 00:38:33.027147    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:38:33.038782    4292 system_svc.go:56] duration metric: took 11.697186ms WaitForService to wait for kubelet
	I0806 00:38:33.038797    4292 kubeadm.go:582] duration metric: took 18.327429775s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 00:38:33.038809    4292 node_conditions.go:102] verifying NodePressure condition ...
	I0806 00:38:33.220593    4292 request.go:629] Waited for 181.736174ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes
	I0806 00:38:33.220673    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0806 00:38:33.220683    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:33.220694    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:33.220703    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:33.223131    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:33.223147    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:33.223155    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:33 GMT
	I0806 00:38:33.223160    4292 round_trippers.go:580]     Audit-Id: c7a766de-973c-44db-9b8e-eb7ce291fdca
	I0806 00:38:33.223172    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:33.223177    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:33.223182    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:33.223222    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:33.223296    4292 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5011 chars]
	I0806 00:38:33.223576    4292 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 00:38:33.223592    4292 node_conditions.go:123] node cpu capacity is 2
	I0806 00:38:33.223604    4292 node_conditions.go:105] duration metric: took 184.787012ms to run NodePressure ...
	I0806 00:38:33.223614    4292 start.go:241] waiting for startup goroutines ...
	I0806 00:38:33.223627    4292 start.go:246] waiting for cluster config update ...
	I0806 00:38:33.223640    4292 start.go:255] writing updated cluster config ...
	I0806 00:38:33.244314    4292 out.go:177] 
	I0806 00:38:33.265217    4292 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:38:33.265273    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:38:33.287112    4292 out.go:177] * Starting "multinode-100000-m02" worker node in "multinode-100000" cluster
	I0806 00:38:33.345022    4292 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:38:33.345057    4292 cache.go:56] Caching tarball of preloaded images
	I0806 00:38:33.345244    4292 preload.go:172] Found /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0806 00:38:33.345262    4292 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 00:38:33.345351    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:38:33.346110    4292 start.go:360] acquireMachinesLock for multinode-100000-m02: {Name:mk23fe223591838ba69a1052c4474834b6e8897d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:38:33.346217    4292 start.go:364] duration metric: took 84.997µs to acquireMachinesLock for "multinode-100000-m02"
	I0806 00:38:33.346243    4292 start.go:93] Provisioning new machine with config: &{Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0806 00:38:33.346328    4292 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0806 00:38:33.367079    4292 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 00:38:33.367208    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:33.367236    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:33.376938    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52447
	I0806 00:38:33.377289    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:33.377644    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:33.377655    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:33.377842    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:33.377956    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:38:33.378049    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:33.378167    4292 start.go:159] libmachine.API.Create for "multinode-100000" (driver="hyperkit")
	I0806 00:38:33.378183    4292 client.go:168] LocalClient.Create starting
	I0806 00:38:33.378211    4292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem
	I0806 00:38:33.378259    4292 main.go:141] libmachine: Decoding PEM data...
	I0806 00:38:33.378273    4292 main.go:141] libmachine: Parsing certificate...
	I0806 00:38:33.378324    4292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem
	I0806 00:38:33.378363    4292 main.go:141] libmachine: Decoding PEM data...
	I0806 00:38:33.378372    4292 main.go:141] libmachine: Parsing certificate...
	I0806 00:38:33.378386    4292 main.go:141] libmachine: Running pre-create checks...
	I0806 00:38:33.378391    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .PreCreateCheck
	I0806 00:38:33.378464    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:33.378493    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetConfigRaw
	I0806 00:38:33.388269    4292 main.go:141] libmachine: Creating machine...
	I0806 00:38:33.388286    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .Create
	I0806 00:38:33.388457    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:33.388692    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | I0806 00:38:33.388444    4424 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 00:38:33.388794    4292 main.go:141] libmachine: (multinode-100000-m02) Downloading /Users/jenkins/minikube-integration/19370-944/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-944/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 00:38:33.588443    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | I0806 00:38:33.588344    4424 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa...
	I0806 00:38:33.635329    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | I0806 00:38:33.635211    4424 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/multinode-100000-m02.rawdisk...
	I0806 00:38:33.635352    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Writing magic tar header
	I0806 00:38:33.635368    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Writing SSH key tar header
	I0806 00:38:33.635773    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | I0806 00:38:33.635735    4424 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02 ...
	I0806 00:38:34.046661    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:34.046692    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/hyperkit.pid
	I0806 00:38:34.046795    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Using UUID 11e38ce6-805a-4a8b-9cb1-968ee3a613d4
	I0806 00:38:34.072180    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Generated MAC ee:b:b7:3a:75:5c
	I0806 00:38:34.072206    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000
	I0806 00:38:34.072252    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"11e38ce6-805a-4a8b-9cb1-968ee3a613d4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00011a450)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 00:38:34.072281    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"11e38ce6-805a-4a8b-9cb1-968ee3a613d4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00011a450)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 00:38:34.072340    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "11e38ce6-805a-4a8b-9cb1-968ee3a613d4", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/multinode-100000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"}
	I0806 00:38:34.072382    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 11e38ce6-805a-4a8b-9cb1-968ee3a613d4 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/multinode-100000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"
	I0806 00:38:34.072394    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0806 00:38:34.075231    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: Pid is 4427
	I0806 00:38:34.076417    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 0
	I0806 00:38:34.076438    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:34.076502    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:34.077372    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:34.077449    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:34.077468    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:34.077497    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:34.077509    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:34.077532    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:34.077550    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:34.077560    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:34.077570    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:34.077578    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:34.077587    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:34.077606    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:34.077631    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:34.077647    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:34.082964    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0806 00:38:34.092078    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0806 00:38:34.092798    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:38:34.092819    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:38:34.092831    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:38:34.092850    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:38:34.480770    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0806 00:38:34.480795    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0806 00:38:34.595499    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:38:34.595518    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:38:34.595530    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:38:34.595538    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:38:34.596350    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0806 00:38:34.596362    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0806 00:38:36.077787    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 1
	I0806 00:38:36.077803    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:36.077889    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:36.078719    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:36.078768    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:36.078779    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:36.078796    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:36.078805    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:36.078813    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:36.078820    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:36.078827    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:36.078837    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:36.078843    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:36.078849    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:36.078864    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:36.078881    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:36.078889    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:38.079369    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 2
	I0806 00:38:38.079385    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:38.079432    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:38.080212    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:38.080262    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:38.080273    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:38.080290    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:38.080296    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:38.080303    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:38.080310    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:38.080318    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:38.080325    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:38.080339    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:38.080355    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:38.080367    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:38.080376    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:38.080384    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:40.081876    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 3
	I0806 00:38:40.081892    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:40.081903    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:40.082774    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:40.082801    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:40.082812    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:40.082846    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:40.082873    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:40.082900    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:40.082918    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:40.082931    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:40.082940    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:40.082950    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:40.082966    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:40.082978    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:40.082987    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:40.082995    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:40.179725    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:40 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0806 00:38:40.179781    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:40 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0806 00:38:40.179795    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:40 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0806 00:38:40.203197    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:40 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0806 00:38:42.084360    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 4
	I0806 00:38:42.084374    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:42.084499    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:42.085281    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:42.085335    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:42.085343    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:42.085351    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:42.085358    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:42.085365    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:42.085371    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:42.085378    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:42.085386    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:42.085402    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:42.085414    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:42.085433    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:42.085441    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:42.085450    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:44.085602    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 5
	I0806 00:38:44.085628    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:44.085697    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:44.086496    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:44.086550    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 13 entries in /var/db/dhcpd_leases!
	I0806 00:38:44.086561    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b32483}
	I0806 00:38:44.086569    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found match: ee:b:b7:3a:75:5c
	I0806 00:38:44.086577    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | IP: 192.169.0.14
	I0806 00:38:44.086637    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetConfigRaw
	I0806 00:38:44.087855    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:44.087962    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:44.088059    4292 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0806 00:38:44.088068    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetState
	I0806 00:38:44.088141    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:44.088197    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:44.089006    4292 main.go:141] libmachine: Detecting operating system of created instance...
	I0806 00:38:44.089014    4292 main.go:141] libmachine: Waiting for SSH to be available...
	I0806 00:38:44.089023    4292 main.go:141] libmachine: Getting to WaitForSSH function...
	I0806 00:38:44.089029    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:44.089111    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:44.089190    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:44.089273    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:44.089354    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:44.089473    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:44.089664    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:44.089672    4292 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0806 00:38:45.153792    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:38:45.153806    4292 main.go:141] libmachine: Detecting the provisioner...
	I0806 00:38:45.153811    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.153942    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.154043    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.154169    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.154275    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.154425    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.154571    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.154581    4292 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0806 00:38:45.217564    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0806 00:38:45.217637    4292 main.go:141] libmachine: found compatible host: buildroot
	I0806 00:38:45.217648    4292 main.go:141] libmachine: Provisioning with buildroot...
	I0806 00:38:45.217668    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:38:45.217807    4292 buildroot.go:166] provisioning hostname "multinode-100000-m02"
	I0806 00:38:45.217817    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:38:45.217917    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.218023    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.218107    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.218194    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.218285    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.218407    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.218557    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.218566    4292 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-100000-m02 && echo "multinode-100000-m02" | sudo tee /etc/hostname
	I0806 00:38:45.293086    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-100000-m02
	
	I0806 00:38:45.293102    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.293254    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.293346    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.293437    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.293522    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.293658    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.293798    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.293811    4292 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-100000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-100000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-100000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 00:38:45.363408    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:38:45.363423    4292 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19370-944/.minikube CaCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19370-944/.minikube}
	I0806 00:38:45.363450    4292 buildroot.go:174] setting up certificates
	I0806 00:38:45.363457    4292 provision.go:84] configureAuth start
	I0806 00:38:45.363465    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:38:45.363605    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:38:45.363709    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.363796    4292 provision.go:143] copyHostCerts
	I0806 00:38:45.363827    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:38:45.363873    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem, removing ...
	I0806 00:38:45.363879    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:38:45.364378    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem (1078 bytes)
	I0806 00:38:45.364592    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:38:45.364623    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem, removing ...
	I0806 00:38:45.364628    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:38:45.364717    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem (1123 bytes)
	I0806 00:38:45.364875    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:38:45.364915    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem, removing ...
	I0806 00:38:45.364920    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:38:45.365034    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem (1679 bytes)
	I0806 00:38:45.365183    4292 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem org=jenkins.multinode-100000-m02 san=[127.0.0.1 192.169.0.14 localhost minikube multinode-100000-m02]
	I0806 00:38:45.437744    4292 provision.go:177] copyRemoteCerts
	I0806 00:38:45.437791    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 00:38:45.437806    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.437948    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.438040    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.438126    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.438207    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:38:45.477030    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0806 00:38:45.477105    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0806 00:38:45.496899    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0806 00:38:45.496965    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 00:38:45.516273    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0806 00:38:45.516341    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 00:38:45.536083    4292 provision.go:87] duration metric: took 172.615051ms to configureAuth
	I0806 00:38:45.536096    4292 buildroot.go:189] setting minikube options for container-runtime
	I0806 00:38:45.536221    4292 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:38:45.536234    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:45.536380    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.536470    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.536563    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.536650    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.536733    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.536861    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.536987    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.536994    4292 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0806 00:38:45.599518    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0806 00:38:45.599531    4292 buildroot.go:70] root file system type: tmpfs
	I0806 00:38:45.599626    4292 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0806 00:38:45.599637    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.599779    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.599891    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.599996    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.600086    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.600232    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.600374    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.600420    4292 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.13"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0806 00:38:45.674942    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.13
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0806 00:38:45.674960    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.675092    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.675165    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.675259    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.675344    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.675469    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.675602    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.675614    4292 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0806 00:38:47.211811    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0806 00:38:47.211826    4292 main.go:141] libmachine: Checking connection to Docker...
	I0806 00:38:47.211840    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetURL
	I0806 00:38:47.211985    4292 main.go:141] libmachine: Docker is up and running!
	I0806 00:38:47.211993    4292 main.go:141] libmachine: Reticulating splines...
	I0806 00:38:47.212004    4292 client.go:171] duration metric: took 13.833536596s to LocalClient.Create
	I0806 00:38:47.212016    4292 start.go:167] duration metric: took 13.833577856s to libmachine.API.Create "multinode-100000"
	I0806 00:38:47.212022    4292 start.go:293] postStartSetup for "multinode-100000-m02" (driver="hyperkit")
	I0806 00:38:47.212029    4292 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 00:38:47.212038    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.212165    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 00:38:47.212186    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:47.212274    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:47.212359    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.212450    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:47.212536    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:38:47.253675    4292 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 00:38:47.257359    4292 command_runner.go:130] > NAME=Buildroot
	I0806 00:38:47.257369    4292 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0806 00:38:47.257374    4292 command_runner.go:130] > ID=buildroot
	I0806 00:38:47.257380    4292 command_runner.go:130] > VERSION_ID=2023.02.9
	I0806 00:38:47.257386    4292 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0806 00:38:47.257598    4292 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 00:38:47.257609    4292 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/addons for local assets ...
	I0806 00:38:47.257715    4292 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/files for local assets ...
	I0806 00:38:47.257899    4292 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> 14372.pem in /etc/ssl/certs
	I0806 00:38:47.257909    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> /etc/ssl/certs/14372.pem
	I0806 00:38:47.258116    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 00:38:47.265892    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem --> /etc/ssl/certs/14372.pem (1708 bytes)
	I0806 00:38:47.297110    4292 start.go:296] duration metric: took 85.078237ms for postStartSetup
	I0806 00:38:47.297144    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetConfigRaw
	I0806 00:38:47.297792    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:38:47.297951    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:38:47.298302    4292 start.go:128] duration metric: took 13.951673071s to createHost
	I0806 00:38:47.298316    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:47.298413    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:47.298502    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.298600    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.298678    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:47.298783    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:47.298907    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:47.298914    4292 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 00:38:47.362043    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722929927.409318196
	
	I0806 00:38:47.362057    4292 fix.go:216] guest clock: 1722929927.409318196
	I0806 00:38:47.362062    4292 fix.go:229] Guest: 2024-08-06 00:38:47.409318196 -0700 PDT Remote: 2024-08-06 00:38:47.29831 -0700 PDT m=+194.654596821 (delta=111.008196ms)
	I0806 00:38:47.362071    4292 fix.go:200] guest clock delta is within tolerance: 111.008196ms
	I0806 00:38:47.362075    4292 start.go:83] releasing machines lock for "multinode-100000-m02", held for 14.015572789s
	I0806 00:38:47.362092    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.362220    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:38:47.382612    4292 out.go:177] * Found network options:
	I0806 00:38:47.403509    4292 out.go:177]   - NO_PROXY=192.169.0.13
	W0806 00:38:47.425687    4292 proxy.go:119] fail to check proxy env: Error ip not in block
	I0806 00:38:47.425738    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.426659    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.426958    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.427090    4292 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 00:38:47.427141    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	W0806 00:38:47.427187    4292 proxy.go:119] fail to check proxy env: Error ip not in block
	I0806 00:38:47.427313    4292 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0806 00:38:47.427341    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:47.427407    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:47.427565    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.427581    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:47.427794    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:47.427828    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.428004    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:38:47.428059    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:47.428184    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:38:47.463967    4292 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0806 00:38:47.464076    4292 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 00:38:47.464135    4292 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 00:38:47.515738    4292 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0806 00:38:47.516046    4292 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0806 00:38:47.516081    4292 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 00:38:47.516093    4292 start.go:495] detecting cgroup driver to use...
	I0806 00:38:47.516195    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:38:47.531806    4292 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0806 00:38:47.532062    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0806 00:38:47.541039    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0806 00:38:47.549828    4292 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0806 00:38:47.549876    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0806 00:38:47.558599    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:38:47.567484    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0806 00:38:47.576295    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:38:47.585146    4292 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 00:38:47.594084    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0806 00:38:47.603103    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0806 00:38:47.612032    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0806 00:38:47.620981    4292 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 00:38:47.628905    4292 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0806 00:38:47.629040    4292 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 00:38:47.637032    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:38:47.727863    4292 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0806 00:38:47.745831    4292 start.go:495] detecting cgroup driver to use...
	I0806 00:38:47.745898    4292 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0806 00:38:47.763079    4292 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0806 00:38:47.764017    4292 command_runner.go:130] > [Unit]
	I0806 00:38:47.764028    4292 command_runner.go:130] > Description=Docker Application Container Engine
	I0806 00:38:47.764033    4292 command_runner.go:130] > Documentation=https://docs.docker.com
	I0806 00:38:47.764038    4292 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0806 00:38:47.764043    4292 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0806 00:38:47.764047    4292 command_runner.go:130] > StartLimitBurst=3
	I0806 00:38:47.764051    4292 command_runner.go:130] > StartLimitIntervalSec=60
	I0806 00:38:47.764054    4292 command_runner.go:130] > [Service]
	I0806 00:38:47.764058    4292 command_runner.go:130] > Type=notify
	I0806 00:38:47.764062    4292 command_runner.go:130] > Restart=on-failure
	I0806 00:38:47.764066    4292 command_runner.go:130] > Environment=NO_PROXY=192.169.0.13
	I0806 00:38:47.764072    4292 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0806 00:38:47.764084    4292 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0806 00:38:47.764091    4292 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0806 00:38:47.764099    4292 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0806 00:38:47.764105    4292 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0806 00:38:47.764111    4292 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0806 00:38:47.764118    4292 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0806 00:38:47.764125    4292 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0806 00:38:47.764132    4292 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0806 00:38:47.764135    4292 command_runner.go:130] > ExecStart=
	I0806 00:38:47.764154    4292 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0806 00:38:47.764161    4292 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0806 00:38:47.764170    4292 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0806 00:38:47.764178    4292 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0806 00:38:47.764185    4292 command_runner.go:130] > LimitNOFILE=infinity
	I0806 00:38:47.764190    4292 command_runner.go:130] > LimitNPROC=infinity
	I0806 00:38:47.764193    4292 command_runner.go:130] > LimitCORE=infinity
	I0806 00:38:47.764198    4292 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0806 00:38:47.764203    4292 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0806 00:38:47.764207    4292 command_runner.go:130] > TasksMax=infinity
	I0806 00:38:47.764211    4292 command_runner.go:130] > TimeoutStartSec=0
	I0806 00:38:47.764221    4292 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0806 00:38:47.764225    4292 command_runner.go:130] > Delegate=yes
	I0806 00:38:47.764229    4292 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0806 00:38:47.764248    4292 command_runner.go:130] > KillMode=process
	I0806 00:38:47.764252    4292 command_runner.go:130] > [Install]
	I0806 00:38:47.764256    4292 command_runner.go:130] > WantedBy=multi-user.target
	I0806 00:38:47.765971    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:38:47.779284    4292 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 00:38:47.799617    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:38:47.811733    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:38:47.822897    4292 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0806 00:38:47.842546    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:38:47.852923    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:38:47.867417    4292 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0806 00:38:47.867762    4292 ssh_runner.go:195] Run: which cri-dockerd
	I0806 00:38:47.870482    4292 command_runner.go:130] > /usr/bin/cri-dockerd
	I0806 00:38:47.870656    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0806 00:38:47.877934    4292 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0806 00:38:47.891287    4292 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0806 00:38:47.996736    4292 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0806 00:38:48.093921    4292 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0806 00:38:48.093947    4292 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0806 00:38:48.107654    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:38:48.205348    4292 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:39:49.225463    4292 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0806 00:39:49.225479    4292 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0806 00:39:49.225576    4292 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.019011706s)
	I0806 00:39:49.225635    4292 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0806 00:39:49.235342    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0806 00:39:49.235356    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.029974914Z" level=info msg="Starting up"
	I0806 00:39:49.235366    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.030437769Z" level=info msg="containerd not running, starting managed containerd"
	I0806 00:39:49.235376    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.030979400Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=517
	I0806 00:39:49.235386    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.047036729Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0806 00:39:49.235397    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064397167Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0806 00:39:49.235412    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064452673Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0806 00:39:49.235422    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064502313Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0806 00:39:49.235431    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064513542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235443    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064584182Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235454    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064595120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235473    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064727739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235483    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064762709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235494    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064774342Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235504    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064782161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235516    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064887916Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235526    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.065042581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235542    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.066836201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235552    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.066879570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235575    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067028916Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235585    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067064324Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0806 00:39:49.235594    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067179567Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0806 00:39:49.235602    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067249087Z" level=info msg="metadata content store policy set" policy=shared
	I0806 00:39:49.235611    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069585528Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0806 00:39:49.235620    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069659860Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0806 00:39:49.235632    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069674694Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0806 00:39:49.235641    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069684754Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0806 00:39:49.235650    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069696901Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0806 00:39:49.235663    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069776277Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0806 00:39:49.235672    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070041788Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0806 00:39:49.235681    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070145442Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0806 00:39:49.235690    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070181841Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0806 00:39:49.235699    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070193788Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0806 00:39:49.235708    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070209053Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235730    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070220561Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235739    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070229053Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235748    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070237872Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235765    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070247145Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235774    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070258808Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235870    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070271932Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235884    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070282113Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235895    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070295317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235905    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070333749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235913    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070369063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235922    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070379382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235931    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070387399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235940    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070395816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235948    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070403669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235957    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070414456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235966    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070430669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235975    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070442977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235983    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070451302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235992    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070459477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236001    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070468439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236009    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070478113Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0806 00:39:49.236018    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070497412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236026    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070508384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236035    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070518009Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0806 00:39:49.236044    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070547883Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0806 00:39:49.236055    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070582373Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0806 00:39:49.236065    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070592270Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0806 00:39:49.236165    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070600495Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0806 00:39:49.236179    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070607217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236192    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070615273Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0806 00:39:49.236200    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070622931Z" level=info msg="NRI interface is disabled by configuration."
	I0806 00:39:49.236208    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070750538Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0806 00:39:49.236217    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070809085Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0806 00:39:49.236224    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070954500Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0806 00:39:49.236232    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070997549Z" level=info msg="containerd successfully booted in 0.024512s"
	I0806 00:39:49.236240    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.050791909Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0806 00:39:49.236247    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.057142082Z" level=info msg="Loading containers: start."
	I0806 00:39:49.236266    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.142415375Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0806 00:39:49.236275    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.222958623Z" level=info msg="Loading containers: done."
	I0806 00:39:49.236287    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.231011060Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	I0806 00:39:49.236296    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.231179810Z" level=info msg="Daemon has completed initialization"
	I0806 00:39:49.236304    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.256766502Z" level=info msg="API listen on [::]:2376"
	I0806 00:39:49.236312    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 systemd[1]: Started Docker Application Container Engine.
	I0806 00:39:49.236320    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.256921161Z" level=info msg="API listen on /var/run/docker.sock"
	I0806 00:39:49.236327    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.264611587Z" level=info msg="Processing signal 'terminated'"
	I0806 00:39:49.236336    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265650519Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0806 00:39:49.236346    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265852818Z" level=info msg="Daemon shutdown complete"
	I0806 00:39:49.236355    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265902413Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0806 00:39:49.236364    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265913447Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0806 00:39:49.236371    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0806 00:39:49.236376    4292 command_runner.go:130] > Aug 06 07:38:49 multinode-100000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0806 00:39:49.236404    4292 command_runner.go:130] > Aug 06 07:38:49 multinode-100000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0806 00:39:49.236411    4292 command_runner.go:130] > Aug 06 07:38:49 multinode-100000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0806 00:39:49.236417    4292 command_runner.go:130] > Aug 06 07:38:49 multinode-100000-m02 dockerd[911]: time="2024-08-06T07:38:49.299585024Z" level=info msg="Starting up"
	I0806 00:39:49.236427    4292 command_runner.go:130] > Aug 06 07:39:49 multinode-100000-m02 dockerd[911]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0806 00:39:49.236434    4292 command_runner.go:130] > Aug 06 07:39:49 multinode-100000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0806 00:39:49.236440    4292 command_runner.go:130] > Aug 06 07:39:49 multinode-100000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0806 00:39:49.236446    4292 command_runner.go:130] > Aug 06 07:39:49 multinode-100000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0806 00:39:49.260697    4292 out.go:177] 
	W0806 00:39:49.281618    4292 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 06 07:38:46 multinode-100000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.029974914Z" level=info msg="Starting up"
	Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.030437769Z" level=info msg="containerd not running, starting managed containerd"
	Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.030979400Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=517
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.047036729Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064397167Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064452673Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064502313Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064513542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064584182Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064595120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064727739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064762709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064774342Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064782161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064887916Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.065042581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.066836201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.066879570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067028916Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067064324Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067179567Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067249087Z" level=info msg="metadata content store policy set" policy=shared
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069585528Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069659860Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069674694Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069684754Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069696901Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069776277Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070041788Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070145442Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070181841Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070193788Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070209053Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070220561Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070229053Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070237872Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070247145Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070258808Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070271932Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070282113Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070295317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070333749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070369063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070379382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070387399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070395816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070403669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070414456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070430669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070442977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070451302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070459477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070468439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070478113Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070497412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070508384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070518009Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070547883Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070582373Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070592270Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070600495Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070607217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070615273Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070622931Z" level=info msg="NRI interface is disabled by configuration."
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070750538Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070809085Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070954500Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070997549Z" level=info msg="containerd successfully booted in 0.024512s"
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.050791909Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.057142082Z" level=info msg="Loading containers: start."
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.142415375Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.222958623Z" level=info msg="Loading containers: done."
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.231011060Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.231179810Z" level=info msg="Daemon has completed initialization"
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.256766502Z" level=info msg="API listen on [::]:2376"
	Aug 06 07:38:47 multinode-100000-m02 systemd[1]: Started Docker Application Container Engine.
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.256921161Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.264611587Z" level=info msg="Processing signal 'terminated'"
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265650519Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265852818Z" level=info msg="Daemon shutdown complete"
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265902413Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265913447Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 06 07:38:48 multinode-100000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Aug 06 07:38:49 multinode-100000-m02 systemd[1]: docker.service: Deactivated successfully.
	Aug 06 07:38:49 multinode-100000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:38:49 multinode-100000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:38:49 multinode-100000-m02 dockerd[911]: time="2024-08-06T07:38:49.299585024Z" level=info msg="Starting up"
	Aug 06 07:39:49 multinode-100000-m02 dockerd[911]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 06 07:39:49 multinode-100000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 06 07:39:49 multinode-100000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 06 07:39:49 multinode-100000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0806 00:39:49.281745    4292 out.go:239] * 
	W0806 00:39:49.282923    4292 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 00:39:49.343567    4292 out.go:177] 
	
	
	==> Docker <==
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.120405532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.122053171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.122124908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.122262728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.123348677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 cri-dockerd[1120]: time="2024-08-06T07:38:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5fae897eca5b0180afaec9950c31ab8fe6410f45ea64033ab2505d448d0abc87/resolv.conf as [nameserver 192.169.0.1]"
	Aug 06 07:38:31 multinode-100000 cri-dockerd[1120]: time="2024-08-06T07:38:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ea5bc31c54836987e38373933c6df0383027c87ef8cff7c9e1da5b24b5cabe9c/resolv.conf as [nameserver 192.169.0.1]"
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.260884497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.261094181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.261344995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.270291928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.310563342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.310630330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.310652817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.310750128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:39:53 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:53.415212392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:39:53 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:53.415272093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:39:53 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:53.415281683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:39:53 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:53.415427967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:39:53 multinode-100000 cri-dockerd[1120]: time="2024-08-06T07:39:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/730773bd53054521739eb2bf3731e90f06df86c05a2f2435964943abea426db3/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 06 07:39:54 multinode-100000 cri-dockerd[1120]: time="2024-08-06T07:39:54Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Aug 06 07:39:54 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:54.619309751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:39:54 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:54.619368219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:39:54 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:54.619377598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:39:54 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:54.619772649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f4860a1bb0cb9       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Running             busybox                   0                   730773bd53054       busybox-fc5497c4f-dzbn7
	4a58bc5cb9c3e       cbb01a7bd410d                                                                                         14 minutes ago      Running             coredns                   0                   ea5bc31c54836       coredns-7db6d8ff4d-snf8h
	47e0c0c6895ef       6e38f40d628db                                                                                         14 minutes ago      Running             storage-provisioner       0                   5fae897eca5b0       storage-provisioner
	ca21c7b20c75e       kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3              14 minutes ago      Running             kindnet-cni               0                   731b397a827bd       kindnet-g2xk7
	10a2028447459       55bb025d2cfa5                                                                                         14 minutes ago      Running             kube-proxy                0                   6bbb2ed0b308f       kube-proxy-crsrr
	09c41cba0052b       3edc18e7b7672                                                                                         14 minutes ago      Running             kube-scheduler            0                   d20d569460ead       kube-scheduler-multinode-100000
	b60a8dd0efa51       3861cfcd7c04c                                                                                         14 minutes ago      Running             etcd                      0                   94cf07fa5ddcf       etcd-multinode-100000
	6d93185f30a91       1f6d574d502f3                                                                                         14 minutes ago      Running             kube-apiserver            0                   bde71375b0e4c       kube-apiserver-multinode-100000
	e6892e6b325e1       76932a3b37d7e                                                                                         14 minutes ago      Running             kube-controller-manager   0                   8cca7996d392f       kube-controller-manager-multinode-100000
	
	
	==> coredns [4a58bc5cb9c3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54441 - 10694 "HINFO IN 5152607944082316412.2643734041882751245. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.012399296s
	[INFO] 10.244.0.3:56703 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015252s
	[INFO] 10.244.0.3:42200 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.046026881s
	[INFO] 10.244.0.3:42318 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.01031955s
	[INFO] 10.244.0.3:37586 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.010459799s
	[INFO] 10.244.0.3:58156 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135202s
	[INFO] 10.244.0.3:44245 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010537472s
	[INFO] 10.244.0.3:44922 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150629s
	[INFO] 10.244.0.3:39974 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013721s
	[INFO] 10.244.0.3:33617 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010347469s
	[INFO] 10.244.0.3:38936 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154675s
	[INFO] 10.244.0.3:44726 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080983s
	[INFO] 10.244.0.3:41349 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000247413s
	[INFO] 10.244.0.3:54177 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116507s
	[INFO] 10.244.0.3:35929 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000055089s
	[INFO] 10.244.0.3:46361 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084906s
	[INFO] 10.244.0.3:49686 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085442s
	[INFO] 10.244.0.3:47333 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0000847s
	[INFO] 10.244.0.3:41915 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000057433s
	[INFO] 10.244.0.3:34860 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000071303s
	[INFO] 10.244.0.3:46952 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000111703s
	
	
	==> describe nodes <==
	Name:               multinode-100000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-100000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=multinode-100000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_06T00_38_01_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:37:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-100000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 07:52:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 07:50:14 +0000   Tue, 06 Aug 2024 07:37:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 07:50:14 +0000   Tue, 06 Aug 2024 07:37:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 07:50:14 +0000   Tue, 06 Aug 2024 07:37:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 07:50:14 +0000   Tue, 06 Aug 2024 07:38:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.13
	  Hostname:    multinode-100000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 10d8fd2a8ab04e6a90b6dfc076d9ae86
	  System UUID:                9d6d49b5-0000-0000-bb0f-6ea8b6ad2848
	  Boot ID:                    dbebf245-a006-4d46-bf5f-51c5f84b672f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dzbn7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7db6d8ff4d-snf8h                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-multinode-100000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-g2xk7                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-multinode-100000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-100000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-crsrr                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-multinode-100000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node multinode-100000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node multinode-100000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node multinode-100000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node multinode-100000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node multinode-100000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node multinode-100000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node multinode-100000 event: Registered Node multinode-100000 in Controller
	  Normal  NodeReady                14m                kubelet          Node multinode-100000 status is now: NodeReady
	
	
	Name:               multinode-100000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-100000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=multinode-100000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_06T00_52_07_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:52:07 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-100000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 07:52:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 07:52:30 +0000   Tue, 06 Aug 2024 07:52:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 07:52:30 +0000   Tue, 06 Aug 2024 07:52:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 07:52:30 +0000   Tue, 06 Aug 2024 07:52:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 07:52:30 +0000   Tue, 06 Aug 2024 07:52:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.15
	  Hostname:    multinode-100000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 e4dd3c8067364c01aff8902f752ac959
	  System UUID:                83a944ea-0000-0000-930f-df1a6331c821
	  Boot ID:                    dc071d27-e6bc-46d1-9730-b50a8d4da1b8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6l7f2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kindnet-dn72w              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26s
	  kube-system                 kube-proxy-d9c42           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19s                kube-proxy       
	  Normal  NodeHasSufficientMemory  26s (x2 over 26s)  kubelet          Node multinode-100000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x2 over 26s)  kubelet          Node multinode-100000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x2 over 26s)  kubelet          Node multinode-100000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           24s                node-controller  Node multinode-100000-m03 event: Registered Node multinode-100000-m03 in Controller
	  Normal  NodeReady                3s                 kubelet          Node multinode-100000-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +2.230733] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.851509] systemd-fstab-generator[493]: Ignoring "noauto" option for root device
	[  +0.100234] systemd-fstab-generator[504]: Ignoring "noauto" option for root device
	[  +1.793153] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +0.258718] systemd-fstab-generator[802]: Ignoring "noauto" option for root device
	[  +0.053606] kauditd_printk_skb: 95 callbacks suppressed
	[  +0.051277] systemd-fstab-generator[814]: Ignoring "noauto" option for root device
	[  +0.111209] systemd-fstab-generator[828]: Ignoring "noauto" option for root device
	[Aug 6 07:37] systemd-fstab-generator[1073]: Ignoring "noauto" option for root device
	[  +0.053283] kauditd_printk_skb: 92 callbacks suppressed
	[  +0.042150] systemd-fstab-generator[1085]: Ignoring "noauto" option for root device
	[  +0.103517] systemd-fstab-generator[1097]: Ignoring "noauto" option for root device
	[  +0.125760] systemd-fstab-generator[1112]: Ignoring "noauto" option for root device
	[  +3.585995] systemd-fstab-generator[1212]: Ignoring "noauto" option for root device
	[  +2.213789] kauditd_printk_skb: 100 callbacks suppressed
	[  +0.337931] systemd-fstab-generator[1463]: Ignoring "noauto" option for root device
	[  +3.523944] systemd-fstab-generator[1642]: Ignoring "noauto" option for root device
	[  +1.294549] kauditd_printk_skb: 100 callbacks suppressed
	[  +3.741886] systemd-fstab-generator[2044]: Ignoring "noauto" option for root device
	[Aug 6 07:38] systemd-fstab-generator[2255]: Ignoring "noauto" option for root device
	[  +0.124943] kauditd_printk_skb: 32 callbacks suppressed
	[ +16.004460] kauditd_printk_skb: 60 callbacks suppressed
	[Aug 6 07:39] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [b60a8dd0efa5] <==
	{"level":"info","ts":"2024-08-06T07:37:56.793645Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-06T07:37:56.796498Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2024-08-06T07:37:56.796632Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","added-peer-id":"e0290fa3161c5471","added-peer-peer-urls":["https://192.169.0.13:2380"]}
	{"level":"info","ts":"2024-08-06T07:37:57.149401Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-06T07:37:57.149446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-06T07:37:57.149465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgPreVoteResp from e0290fa3161c5471 at term 1"}
	{"level":"info","ts":"2024-08-06T07:37:57.149631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became candidate at term 2"}
	{"level":"info","ts":"2024-08-06T07:37:57.14964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgVoteResp from e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-08-06T07:37:57.149646Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became leader at term 2"}
	{"level":"info","ts":"2024-08-06T07:37:57.149652Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e0290fa3161c5471 elected leader e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-08-06T07:37:57.152418Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:37:57.153493Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e0290fa3161c5471","local-member-attributes":"{Name:multinode-100000 ClientURLs:[https://192.169.0.13:2379]}","request-path":"/0/members/e0290fa3161c5471/attributes","cluster-id":"87b46e718846f146","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-06T07:37:57.153528Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T07:37:57.154583Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T07:37:57.156332Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-06T07:37:57.162987Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.13:2379"}
	{"level":"info","ts":"2024-08-06T07:37:57.167336Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-06T07:37:57.167373Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-06T07:37:57.16953Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:37:57.169589Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:37:57.169719Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:47:57.219223Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":686}
	{"level":"info","ts":"2024-08-06T07:47:57.221754Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":686,"took":"2.185771ms","hash":4164319908,"current-db-size-bytes":1994752,"current-db-size":"2.0 MB","current-db-size-in-use-bytes":1994752,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-08-06T07:47:57.221798Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4164319908,"revision":686,"compact-revision":-1}
	{"level":"info","ts":"2024-08-06T07:52:10.269202Z","caller":"traceutil/trace.go:171","msg":"trace[808197773] transaction","detail":"{read_only:false; response_revision:1165; number_of_response:1; }","duration":"104.082235ms","start":"2024-08-06T07:52:10.165072Z","end":"2024-08-06T07:52:10.269154Z","steps":["trace[808197773] 'process raft request'  (duration: 103.999362ms)"],"step_count":1}
	
	
	==> kernel <==
	 07:52:33 up 16 min,  0 users,  load average: 0.56, 0.19, 0.08
	Linux multinode-100000 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ca21c7b20c75] <==
	I0806 07:51:09.609598       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:51:09.609738       1 main.go:299] handling current node
	I0806 07:51:19.608251       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:51:19.608633       1 main.go:299] handling current node
	I0806 07:51:29.610799       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:51:29.611016       1 main.go:299] handling current node
	I0806 07:51:39.608566       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:51:39.608751       1 main.go:299] handling current node
	I0806 07:51:49.609079       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:51:49.609255       1 main.go:299] handling current node
	I0806 07:51:59.615217       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:51:59.615256       1 main.go:299] handling current node
	I0806 07:52:09.608220       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:52:09.608290       1 main.go:299] handling current node
	I0806 07:52:09.608308       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0806 07:52:09.608317       1 main.go:322] Node multinode-100000-m03 has CIDR [10.244.1.0/24] 
	I0806 07:52:09.608837       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.169.0.15 Flags: [] Table: 0} 
	I0806 07:52:19.608568       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0806 07:52:19.608810       1 main.go:322] Node multinode-100000-m03 has CIDR [10.244.1.0/24] 
	I0806 07:52:19.608997       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:52:19.609157       1 main.go:299] handling current node
	I0806 07:52:29.618338       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:52:29.618506       1 main.go:299] handling current node
	I0806 07:52:29.618578       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0806 07:52:29.618615       1 main.go:322] Node multinode-100000-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [6d93185f30a9] <==
	E0806 07:37:58.467821       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E0806 07:37:58.475966       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0806 07:37:58.532827       1 controller.go:615] quota admission added evaluator for: namespaces
	E0806 07:37:58.541093       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0806 07:37:58.672921       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0806 07:37:59.326856       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0806 07:37:59.329555       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0806 07:37:59.329585       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0806 07:37:59.607795       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0806 07:37:59.629707       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0806 07:37:59.743716       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0806 07:37:59.749420       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.13]
	I0806 07:37:59.751068       1 controller.go:615] quota admission added evaluator for: endpoints
	I0806 07:37:59.755409       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0806 07:38:00.364128       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0806 07:38:00.587524       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0806 07:38:00.593919       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0806 07:38:00.599813       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0806 07:38:14.702592       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0806 07:38:14.795881       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0806 07:51:40.593542       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52513: use of closed network connection
	E0806 07:51:40.913864       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52518: use of closed network connection
	E0806 07:51:41.219815       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52523: use of closed network connection
	E0806 07:51:44.319914       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52554: use of closed network connection
	E0806 07:51:44.505332       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52556: use of closed network connection
	
	
	==> kube-controller-manager [e6892e6b325e] <==
	I0806 07:38:15.355219       1 shared_informer.go:320] Caches are synced for garbage collector
	I0806 07:38:15.355235       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0806 07:38:15.401729       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="38.655935ms"
	I0806 07:38:15.431945       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="30.14675ms"
	I0806 07:38:15.458535       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="26.562482ms"
	I0806 07:38:15.458649       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="50.614µs"
	I0806 07:38:30.766337       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="35.896µs"
	I0806 07:38:30.775206       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.914µs"
	I0806 07:38:31.717892       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="59.878µs"
	I0806 07:38:31.736658       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="9.976174ms"
	I0806 07:38:31.737084       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.186µs"
	I0806 07:38:34.714007       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0806 07:39:52.487758       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.078135ms"
	I0806 07:39:52.498018       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.216294ms"
	I0806 07:39:52.498073       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.228µs"
	I0806 07:39:55.173384       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.984127ms"
	I0806 07:39:55.173460       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.692µs"
	I0806 07:52:07.325935       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-100000-m03\" does not exist"
	I0806 07:52:07.342865       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-100000-m03" podCIDRs=["10.244.1.0/24"]
	I0806 07:52:09.851060       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-100000-m03"
	I0806 07:52:30.373055       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-100000-m03"
	I0806 07:52:30.382873       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.276µs"
	I0806 07:52:30.391038       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.602µs"
	I0806 07:52:32.408559       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.578386ms"
	I0806 07:52:32.408616       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.014µs"
	
	
	==> kube-proxy [10a202844745] <==
	I0806 07:38:15.590518       1 server_linux.go:69] "Using iptables proxy"
	I0806 07:38:15.601869       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.13"]
	I0806 07:38:15.662400       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0806 07:38:15.662440       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0806 07:38:15.662490       1 server_linux.go:165] "Using iptables Proxier"
	I0806 07:38:15.664791       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0806 07:38:15.664918       1 server.go:872] "Version info" version="v1.30.3"
	I0806 07:38:15.664946       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 07:38:15.665753       1 config.go:192] "Starting service config controller"
	I0806 07:38:15.665783       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0806 07:38:15.665799       1 config.go:101] "Starting endpoint slice config controller"
	I0806 07:38:15.665822       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0806 07:38:15.667388       1 config.go:319] "Starting node config controller"
	I0806 07:38:15.667416       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0806 07:38:15.765917       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0806 07:38:15.765965       1 shared_informer.go:320] Caches are synced for service config
	I0806 07:38:15.767534       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [09c41cba0052] <==
	W0806 07:37:58.445840       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0806 07:37:58.445932       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0806 07:37:58.446107       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0806 07:37:58.446242       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0806 07:37:58.446116       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0806 07:37:58.446419       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0806 07:37:58.445401       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0806 07:37:58.446582       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0806 07:37:58.446196       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0806 07:37:58.446734       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0806 07:37:59.253603       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0806 07:37:59.253776       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0806 07:37:59.282330       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0806 07:37:59.282504       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0806 07:37:59.305407       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0806 07:37:59.305621       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0806 07:37:59.351009       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0806 07:37:59.351049       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0806 07:37:59.487287       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0806 07:37:59.487395       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0806 07:37:59.506883       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0806 07:37:59.506925       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0806 07:37:59.509357       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0806 07:37:59.509392       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0806 07:38:01.840667       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 06 07:48:00 multinode-100000 kubelet[2051]: E0806 07:48:00.482201    2051 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:48:00 multinode-100000 kubelet[2051]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:48:00 multinode-100000 kubelet[2051]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:48:00 multinode-100000 kubelet[2051]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:48:00 multinode-100000 kubelet[2051]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:49:00 multinode-100000 kubelet[2051]: E0806 07:49:00.485250    2051 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:49:00 multinode-100000 kubelet[2051]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:49:00 multinode-100000 kubelet[2051]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:49:00 multinode-100000 kubelet[2051]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:49:00 multinode-100000 kubelet[2051]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:50:00 multinode-100000 kubelet[2051]: E0806 07:50:00.481450    2051 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:50:00 multinode-100000 kubelet[2051]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:50:00 multinode-100000 kubelet[2051]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:50:00 multinode-100000 kubelet[2051]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:50:00 multinode-100000 kubelet[2051]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:51:00 multinode-100000 kubelet[2051]: E0806 07:51:00.483720    2051 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:51:00 multinode-100000 kubelet[2051]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:51:00 multinode-100000 kubelet[2051]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:51:00 multinode-100000 kubelet[2051]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:51:00 multinode-100000 kubelet[2051]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:52:00 multinode-100000 kubelet[2051]: E0806 07:52:00.481620    2051 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:52:00 multinode-100000 kubelet[2051]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:52:00 multinode-100000 kubelet[2051]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:52:00 multinode-100000 kubelet[2051]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:52:00 multinode-100000 kubelet[2051]:  > table="nat" chain="KUBE-KUBELET-CANARY"

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-100000 -n multinode-100000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-100000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/AddNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/AddNode (47.51s)

TestMultiNode/serial/CopyFile (2.77s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-100000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-100000 status --output json --alsologtostderr: exit status 2 (315.706827ms)

-- stdout --
	[{"Name":"multinode-100000","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"multinode-100000-m02","Host":"Running","Kubelet":"Stopped","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true},{"Name":"multinode-100000-m03","Host":"Running","Kubelet":"Running","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true}]

-- /stdout --
** stderr ** 
	I0806 00:52:34.900081    5145 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:52:34.900246    5145 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:52:34.900251    5145 out.go:304] Setting ErrFile to fd 2...
	I0806 00:52:34.900255    5145 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:52:34.900440    5145 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	I0806 00:52:34.900612    5145 out.go:298] Setting JSON to true
	I0806 00:52:34.900634    5145 mustload.go:65] Loading cluster: multinode-100000
	I0806 00:52:34.900674    5145 notify.go:220] Checking for updates...
	I0806 00:52:34.900930    5145 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:52:34.900945    5145 status.go:255] checking status of multinode-100000 ...
	I0806 00:52:34.901287    5145 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:52:34.901344    5145 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:52:34.910010    5145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52676
	I0806 00:52:34.910334    5145 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:52:34.910747    5145 main.go:141] libmachine: Using API Version  1
	I0806 00:52:34.910758    5145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:52:34.911208    5145 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:52:34.911346    5145 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:52:34.911430    5145 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:52:34.911500    5145 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:52:34.912491    5145 status.go:330] multinode-100000 host status = "Running" (err=<nil>)
	I0806 00:52:34.912510    5145 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:52:34.912746    5145 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:52:34.912768    5145 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:52:34.921250    5145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52678
	I0806 00:52:34.921600    5145 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:52:34.921963    5145 main.go:141] libmachine: Using API Version  1
	I0806 00:52:34.921989    5145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:52:34.922221    5145 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:52:34.922343    5145 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:52:34.922423    5145 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:52:34.922671    5145 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:52:34.922695    5145 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:52:34.931741    5145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52680
	I0806 00:52:34.932072    5145 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:52:34.932396    5145 main.go:141] libmachine: Using API Version  1
	I0806 00:52:34.932407    5145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:52:34.932613    5145 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:52:34.932726    5145 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:52:34.932876    5145 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:52:34.932897    5145 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:52:34.932972    5145 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:52:34.933054    5145 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:52:34.933136    5145 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:52:34.933226    5145 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:52:34.969297    5145 ssh_runner.go:195] Run: systemctl --version
	I0806 00:52:34.973875    5145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:52:34.984644    5145 kubeconfig.go:125] found "multinode-100000" server: "https://192.169.0.13:8443"
	I0806 00:52:34.984668    5145 api_server.go:166] Checking apiserver status ...
	I0806 00:52:34.984703    5145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:52:34.995513    5145 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1953/cgroup
	W0806 00:52:35.003005    5145 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1953/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0806 00:52:35.003049    5145 ssh_runner.go:195] Run: ls
	I0806 00:52:35.006110    5145 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0806 00:52:35.009369    5145 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0806 00:52:35.009380    5145 status.go:422] multinode-100000 apiserver status = Running (err=<nil>)
	I0806 00:52:35.009389    5145 status.go:257] multinode-100000 status: &{Name:multinode-100000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 00:52:35.009400    5145 status.go:255] checking status of multinode-100000-m02 ...
	I0806 00:52:35.009664    5145 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:52:35.009684    5145 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:52:35.018310    5145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52684
	I0806 00:52:35.018655    5145 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:52:35.018985    5145 main.go:141] libmachine: Using API Version  1
	I0806 00:52:35.018995    5145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:52:35.019185    5145 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:52:35.019429    5145 main.go:141] libmachine: (multinode-100000-m02) Calling .GetState
	I0806 00:52:35.019515    5145 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:52:35.019588    5145 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:52:35.020566    5145 status.go:330] multinode-100000-m02 host status = "Running" (err=<nil>)
	I0806 00:52:35.020576    5145 host.go:66] Checking if "multinode-100000-m02" exists ...
	I0806 00:52:35.020836    5145 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:52:35.020857    5145 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:52:35.029408    5145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52686
	I0806 00:52:35.029729    5145 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:52:35.030036    5145 main.go:141] libmachine: Using API Version  1
	I0806 00:52:35.030047    5145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:52:35.030248    5145 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:52:35.030353    5145 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:52:35.030435    5145 host.go:66] Checking if "multinode-100000-m02" exists ...
	I0806 00:52:35.030681    5145 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:52:35.030702    5145 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:52:35.039144    5145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52688
	I0806 00:52:35.039474    5145 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:52:35.039801    5145 main.go:141] libmachine: Using API Version  1
	I0806 00:52:35.039815    5145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:52:35.040036    5145 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:52:35.040152    5145 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:52:35.040280    5145 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:52:35.040291    5145 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:52:35.040370    5145 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:52:35.040443    5145 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:52:35.040532    5145 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:52:35.040606    5145 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:52:35.075719    5145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:52:35.085495    5145 status.go:257] multinode-100000-m02 status: &{Name:multinode-100000-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0806 00:52:35.085510    5145 status.go:255] checking status of multinode-100000-m03 ...
	I0806 00:52:35.085773    5145 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:52:35.085794    5145 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:52:35.094431    5145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52691
	I0806 00:52:35.094770    5145 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:52:35.095087    5145 main.go:141] libmachine: Using API Version  1
	I0806 00:52:35.095101    5145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:52:35.095300    5145 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:52:35.095404    5145 main.go:141] libmachine: (multinode-100000-m03) Calling .GetState
	I0806 00:52:35.095487    5145 main.go:141] libmachine: (multinode-100000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:52:35.095556    5145 main.go:141] libmachine: (multinode-100000-m03) DBG | hyperkit pid from json: 5072
	I0806 00:52:35.096525    5145 status.go:330] multinode-100000-m03 host status = "Running" (err=<nil>)
	I0806 00:52:35.096537    5145 host.go:66] Checking if "multinode-100000-m03" exists ...
	I0806 00:52:35.096783    5145 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:52:35.096808    5145 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:52:35.105235    5145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52693
	I0806 00:52:35.105572    5145 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:52:35.105907    5145 main.go:141] libmachine: Using API Version  1
	I0806 00:52:35.105922    5145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:52:35.106122    5145 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:52:35.106243    5145 main.go:141] libmachine: (multinode-100000-m03) Calling .GetIP
	I0806 00:52:35.106323    5145 host.go:66] Checking if "multinode-100000-m03" exists ...
	I0806 00:52:35.106567    5145 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:52:35.106597    5145 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:52:35.115055    5145 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52695
	I0806 00:52:35.115413    5145 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:52:35.115778    5145 main.go:141] libmachine: Using API Version  1
	I0806 00:52:35.115793    5145 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:52:35.116191    5145 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:52:35.116315    5145 main.go:141] libmachine: (multinode-100000-m03) Calling .DriverName
	I0806 00:52:35.116455    5145 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:52:35.116467    5145 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHHostname
	I0806 00:52:35.116547    5145 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHPort
	I0806 00:52:35.116632    5145 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:52:35.116713    5145 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHUsername
	I0806 00:52:35.116790    5145 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/id_rsa Username:docker}
	I0806 00:52:35.150222    5145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:52:35.161609    5145 status.go:257] multinode-100000-m03 status: &{Name:multinode-100000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:186: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-100000 status --output json --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-100000 -n multinode-100000
helpers_test.go:244: <<< TestMultiNode/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/CopyFile]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-100000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-100000 logs -n 25: (1.909993551s)
helpers_test.go:252: TestMultiNode/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| kubectl | -p multinode-100000 -- apply -f                   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:39 PDT | 06 Aug 24 00:39 PDT |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- rollout                    | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:39 PDT |                     |
	|         | status deployment/busybox                         |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:49 PDT | 06 Aug 24 00:49 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:49 PDT | 06 Aug 24 00:49 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:49 PDT | 06 Aug 24 00:49 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec                       | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT |                     |
	|         | busybox-fc5497c4f-6l7f2 --                        |                  |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec                       | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | busybox-fc5497c4f-dzbn7 --                        |                  |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec                       | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT |                     |
	|         | busybox-fc5497c4f-6l7f2 --                        |                  |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec                       | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | busybox-fc5497c4f-dzbn7 --                        |                  |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec                       | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT |                     |
	|         | busybox-fc5497c4f-6l7f2 -- nslookup               |                  |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec                       | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | busybox-fc5497c4f-dzbn7 -- nslookup               |                  |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o                | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec                       | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT |                     |
	|         | busybox-fc5497c4f-6l7f2                           |                  |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec                       | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | busybox-fc5497c4f-dzbn7                           |                  |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec                       | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | busybox-fc5497c4f-dzbn7 -- sh                     |                  |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1                          |                  |         |         |                     |                     |
	| node    | add -p multinode-100000 -v 3                      | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:52 PDT |
	|         | --alsologtostderr                                 |                  |         |         |                     |                     |
	|---------|---------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 00:35:32
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 00:35:32.676325    4292 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:35:32.676601    4292 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:35:32.676607    4292 out.go:304] Setting ErrFile to fd 2...
	I0806 00:35:32.676610    4292 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:35:32.676768    4292 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	I0806 00:35:32.678248    4292 out.go:298] Setting JSON to false
	I0806 00:35:32.700659    4292 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2094,"bootTime":1722927638,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0806 00:35:32.700749    4292 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 00:35:32.723275    4292 out.go:177] * [multinode-100000] minikube v1.33.1 on Darwin 14.5
	I0806 00:35:32.765686    4292 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 00:35:32.765838    4292 notify.go:220] Checking for updates...
	I0806 00:35:32.808341    4292 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:35:32.829496    4292 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0806 00:35:32.850407    4292 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:35:32.871672    4292 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 00:35:32.892641    4292 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 00:35:32.913945    4292 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:35:32.944520    4292 out.go:177] * Using the hyperkit driver based on user configuration
	I0806 00:35:32.986143    4292 start.go:297] selected driver: hyperkit
	I0806 00:35:32.986161    4292 start.go:901] validating driver "hyperkit" against <nil>
	I0806 00:35:32.986176    4292 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 00:35:32.989717    4292 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:35:32.989824    4292 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19370-944/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0806 00:35:32.998218    4292 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0806 00:35:33.002169    4292 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:35:33.002189    4292 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0806 00:35:33.002223    4292 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 00:35:33.002423    4292 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 00:35:33.002481    4292 cni.go:84] Creating CNI manager for ""
	I0806 00:35:33.002490    4292 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0806 00:35:33.002502    4292 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0806 00:35:33.002569    4292 start.go:340] cluster config:
	{Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:35:33.002652    4292 iso.go:125] acquiring lock: {Name:mka9ceffb203a07dd8928fb34e5b66df1a4204ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:35:33.044508    4292 out.go:177] * Starting "multinode-100000" primary control-plane node in "multinode-100000" cluster
	I0806 00:35:33.065219    4292 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:35:33.065293    4292 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0806 00:35:33.065354    4292 cache.go:56] Caching tarball of preloaded images
	I0806 00:35:33.065635    4292 preload.go:172] Found /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0806 00:35:33.065654    4292 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 00:35:33.066173    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:35:33.066211    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json: {Name:mk72349cbf3074da6761af52b168e673548f3ffe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:35:33.066817    4292 start.go:360] acquireMachinesLock for multinode-100000: {Name:mk23fe223591838ba69a1052c4474834b6e8897d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:35:33.066922    4292 start.go:364] duration metric: took 85.684µs to acquireMachinesLock for "multinode-100000"
	I0806 00:35:33.066972    4292 start.go:93] Provisioning new machine with config: &{Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 00:35:33.067065    4292 start.go:125] createHost starting for "" (driver="hyperkit")
	I0806 00:35:33.088582    4292 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 00:35:33.088841    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:35:33.088907    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:35:33.098805    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52410
	I0806 00:35:33.099159    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:35:33.099600    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:35:33.099614    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:35:33.099818    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:35:33.099943    4292 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:35:33.100033    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:33.100130    4292 start.go:159] libmachine.API.Create for "multinode-100000" (driver="hyperkit")
	I0806 00:35:33.100152    4292 client.go:168] LocalClient.Create starting
	I0806 00:35:33.100189    4292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem
	I0806 00:35:33.100243    4292 main.go:141] libmachine: Decoding PEM data...
	I0806 00:35:33.100257    4292 main.go:141] libmachine: Parsing certificate...
	I0806 00:35:33.100320    4292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem
	I0806 00:35:33.100359    4292 main.go:141] libmachine: Decoding PEM data...
	I0806 00:35:33.100370    4292 main.go:141] libmachine: Parsing certificate...
	I0806 00:35:33.100382    4292 main.go:141] libmachine: Running pre-create checks...
	I0806 00:35:33.100392    4292 main.go:141] libmachine: (multinode-100000) Calling .PreCreateCheck
	I0806 00:35:33.100485    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:33.100635    4292 main.go:141] libmachine: (multinode-100000) Calling .GetConfigRaw
	I0806 00:35:33.109837    4292 main.go:141] libmachine: Creating machine...
	I0806 00:35:33.109854    4292 main.go:141] libmachine: (multinode-100000) Calling .Create
	I0806 00:35:33.110025    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:33.110277    4292 main.go:141] libmachine: (multinode-100000) DBG | I0806 00:35:33.110022    4300 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 00:35:33.110418    4292 main.go:141] libmachine: (multinode-100000) Downloading /Users/jenkins/minikube-integration/19370-944/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-944/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 00:35:33.295827    4292 main.go:141] libmachine: (multinode-100000) DBG | I0806 00:35:33.295690    4300 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa...
	I0806 00:35:33.502634    4292 main.go:141] libmachine: (multinode-100000) DBG | I0806 00:35:33.502493    4300 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/multinode-100000.rawdisk...
	I0806 00:35:33.502655    4292 main.go:141] libmachine: (multinode-100000) DBG | Writing magic tar header
	I0806 00:35:33.502665    4292 main.go:141] libmachine: (multinode-100000) DBG | Writing SSH key tar header
	I0806 00:35:33.503537    4292 main.go:141] libmachine: (multinode-100000) DBG | I0806 00:35:33.503390    4300 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000 ...
	I0806 00:35:33.877390    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:33.877412    4292 main.go:141] libmachine: (multinode-100000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/hyperkit.pid
	I0806 00:35:33.877424    4292 main.go:141] libmachine: (multinode-100000) DBG | Using UUID 9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848
	I0806 00:35:33.988705    4292 main.go:141] libmachine: (multinode-100000) DBG | Generated MAC 1a:eb:5b:3:28:91
	I0806 00:35:33.988725    4292 main.go:141] libmachine: (multinode-100000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000
	I0806 00:35:33.988759    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000aa330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 00:35:33.988793    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000aa330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 00:35:33.988839    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/multinode-100000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"}
	I0806 00:35:33.988870    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/multinode-100000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/console-ring -f kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"
	I0806 00:35:33.988893    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0806 00:35:33.991956    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: Pid is 4303
	I0806 00:35:33.992376    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 0
	I0806 00:35:33.992391    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:33.992446    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:33.993278    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:33.993360    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:33.993380    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:33.993405    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:33.993424    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:33.993437    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:33.993449    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:33.993464    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:33.993498    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:33.993520    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:33.993540    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:33.993552    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:33.993562    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:33.999245    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0806 00:35:34.053136    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0806 00:35:34.053714    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:35:34.053737    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:35:34.053746    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:35:34.053754    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:35:34.433368    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0806 00:35:34.433384    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0806 00:35:34.548018    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:35:34.548040    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:35:34.548066    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:35:34.548085    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:35:34.548944    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0806 00:35:34.548954    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0806 00:35:35.995149    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 1
	I0806 00:35:35.995163    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:35.995266    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:35.996054    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:35.996094    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:35.996108    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:35.996132    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:35.996169    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:35.996185    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:35.996200    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:35.996223    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:35.996236    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:35.996250    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:35.996258    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:35.996265    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:35.996272    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:37.997721    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 2
	I0806 00:35:37.997737    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:37.997833    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:37.998751    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:37.998796    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:37.998808    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:37.998817    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:37.998824    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:37.998834    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:37.998843    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:37.998850    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:37.998857    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:37.998872    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:37.998885    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:37.998906    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:37.998915    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:40.000050    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 3
	I0806 00:35:40.000064    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:40.000167    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:40.000922    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:40.000982    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:40.000992    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:40.001002    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:40.001009    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:40.001016    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:40.001021    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:40.001028    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:40.001034    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:40.001051    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:40.001065    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:40.001075    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:40.001092    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:40.125670    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:40 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0806 00:35:40.125726    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:40 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0806 00:35:40.125735    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:40 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0806 00:35:40.149566    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:40 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0806 00:35:42.001968    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 4
	I0806 00:35:42.001983    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:42.002066    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:42.002835    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:42.002890    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:42.002900    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:42.002909    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:42.002917    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:42.002940    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:42.002948    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:42.002955    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:42.002964    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:42.002970    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:42.002978    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:42.002985    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:42.002996    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:44.004662    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 5
	I0806 00:35:44.004678    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:44.004700    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:44.005526    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:44.005569    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:35:44.005581    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:35:44.005591    4292 main.go:141] libmachine: (multinode-100000) DBG | Found match: 1a:eb:5b:3:28:91
	I0806 00:35:44.005619    4292 main.go:141] libmachine: (multinode-100000) DBG | IP: 192.169.0.13
	I0806 00:35:44.005700    4292 main.go:141] libmachine: (multinode-100000) Calling .GetConfigRaw
	I0806 00:35:44.006323    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:44.006428    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:44.006524    4292 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0806 00:35:44.006537    4292 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:35:44.006634    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:44.006694    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:44.007476    4292 main.go:141] libmachine: Detecting operating system of created instance...
	I0806 00:35:44.007487    4292 main.go:141] libmachine: Waiting for SSH to be available...
	I0806 00:35:44.007493    4292 main.go:141] libmachine: Getting to WaitForSSH function...
	I0806 00:35:44.007498    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:44.007591    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:44.007674    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:44.007764    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:44.007853    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:44.007987    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:44.008184    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:44.008192    4292 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0806 00:35:45.076448    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:35:45.076465    4292 main.go:141] libmachine: Detecting the provisioner...
	I0806 00:35:45.076471    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.076624    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.076724    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.076819    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.076915    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.077045    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.077189    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.077197    4292 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0806 00:35:45.144548    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0806 00:35:45.144591    4292 main.go:141] libmachine: found compatible host: buildroot
	I0806 00:35:45.144598    4292 main.go:141] libmachine: Provisioning with buildroot...
	I0806 00:35:45.144603    4292 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:35:45.144740    4292 buildroot.go:166] provisioning hostname "multinode-100000"
	I0806 00:35:45.144749    4292 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:35:45.144843    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.144938    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.145034    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.145124    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.145213    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.145351    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.145492    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.145501    4292 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-100000 && echo "multinode-100000" | sudo tee /etc/hostname
	I0806 00:35:45.223228    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-100000
	
	I0806 00:35:45.223249    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.223379    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.223481    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.223570    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.223660    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.223790    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.223939    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.223951    4292 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-100000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-100000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-100000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 00:35:45.292034    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:35:45.292059    4292 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19370-944/.minikube CaCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19370-944/.minikube}
	I0806 00:35:45.292078    4292 buildroot.go:174] setting up certificates
	I0806 00:35:45.292089    4292 provision.go:84] configureAuth start
	I0806 00:35:45.292095    4292 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:35:45.292225    4292 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:35:45.292323    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.292419    4292 provision.go:143] copyHostCerts
	I0806 00:35:45.292449    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:35:45.292512    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem, removing ...
	I0806 00:35:45.292520    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:35:45.292668    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem (1078 bytes)
	I0806 00:35:45.292900    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:35:45.292931    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem, removing ...
	I0806 00:35:45.292935    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:35:45.293022    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem (1123 bytes)
	I0806 00:35:45.293179    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:35:45.293218    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem, removing ...
	I0806 00:35:45.293223    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:35:45.293307    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem (1679 bytes)
	I0806 00:35:45.293461    4292 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem org=jenkins.multinode-100000 san=[127.0.0.1 192.169.0.13 localhost minikube multinode-100000]
	I0806 00:35:45.520073    4292 provision.go:177] copyRemoteCerts
	I0806 00:35:45.520131    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 00:35:45.520149    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.520304    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.520400    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.520492    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.520588    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:35:45.562400    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0806 00:35:45.562481    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 00:35:45.581346    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0806 00:35:45.581402    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0806 00:35:45.600722    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0806 00:35:45.600779    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 00:35:45.620152    4292 provision.go:87] duration metric: took 328.044128ms to configureAuth
	I0806 00:35:45.620167    4292 buildroot.go:189] setting minikube options for container-runtime
	I0806 00:35:45.620308    4292 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:35:45.620324    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:45.620480    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.620572    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.620655    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.620746    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.620832    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.620951    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.621092    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.621099    4292 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0806 00:35:45.688009    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0806 00:35:45.688025    4292 buildroot.go:70] root file system type: tmpfs
	I0806 00:35:45.688103    4292 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0806 00:35:45.688116    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.688258    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.688371    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.688463    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.688579    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.688745    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.688882    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.688931    4292 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0806 00:35:45.766293    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0806 00:35:45.766319    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.766466    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.766564    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.766645    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.766724    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.766843    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.766987    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.766999    4292 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0806 00:35:47.341714    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0806 00:35:47.341733    4292 main.go:141] libmachine: Checking connection to Docker...
	I0806 00:35:47.341750    4292 main.go:141] libmachine: (multinode-100000) Calling .GetURL
	I0806 00:35:47.341889    4292 main.go:141] libmachine: Docker is up and running!
	I0806 00:35:47.341898    4292 main.go:141] libmachine: Reticulating splines...
	I0806 00:35:47.341902    4292 client.go:171] duration metric: took 14.241464585s to LocalClient.Create
	I0806 00:35:47.341919    4292 start.go:167] duration metric: took 14.241510649s to libmachine.API.Create "multinode-100000"
	I0806 00:35:47.341930    4292 start.go:293] postStartSetup for "multinode-100000" (driver="hyperkit")
	I0806 00:35:47.341937    4292 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 00:35:47.341947    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.342092    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 00:35:47.342105    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:47.342199    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:47.342285    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.342379    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:47.342467    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:35:47.382587    4292 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 00:35:47.385469    4292 command_runner.go:130] > NAME=Buildroot
	I0806 00:35:47.385477    4292 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0806 00:35:47.385481    4292 command_runner.go:130] > ID=buildroot
	I0806 00:35:47.385485    4292 command_runner.go:130] > VERSION_ID=2023.02.9
	I0806 00:35:47.385489    4292 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0806 00:35:47.385581    4292 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 00:35:47.385594    4292 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/addons for local assets ...
	I0806 00:35:47.385696    4292 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/files for local assets ...
	I0806 00:35:47.385887    4292 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> 14372.pem in /etc/ssl/certs
	I0806 00:35:47.385903    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> /etc/ssl/certs/14372.pem
	I0806 00:35:47.386118    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 00:35:47.394135    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem --> /etc/ssl/certs/14372.pem (1708 bytes)
	I0806 00:35:47.413151    4292 start.go:296] duration metric: took 71.212336ms for postStartSetup
	I0806 00:35:47.413177    4292 main.go:141] libmachine: (multinode-100000) Calling .GetConfigRaw
	I0806 00:35:47.413783    4292 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:35:47.413932    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:35:47.414265    4292 start.go:128] duration metric: took 14.346903661s to createHost
	I0806 00:35:47.414279    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:47.414369    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:47.414451    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.414534    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.414620    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:47.414723    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:47.414850    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:47.414859    4292 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 00:35:47.480376    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722929747.524109427
	
	I0806 00:35:47.480388    4292 fix.go:216] guest clock: 1722929747.524109427
	I0806 00:35:47.480393    4292 fix.go:229] Guest: 2024-08-06 00:35:47.524109427 -0700 PDT Remote: 2024-08-06 00:35:47.414273 -0700 PDT m=+14.774098631 (delta=109.836427ms)
	I0806 00:35:47.480413    4292 fix.go:200] guest clock delta is within tolerance: 109.836427ms
	I0806 00:35:47.480416    4292 start.go:83] releasing machines lock for "multinode-100000", held for 14.413201307s
	I0806 00:35:47.480435    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.480582    4292 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:35:47.480686    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.481025    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.481144    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.481220    4292 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 00:35:47.481250    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:47.481279    4292 ssh_runner.go:195] Run: cat /version.json
	I0806 00:35:47.481291    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:47.481352    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:47.481353    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:47.481449    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.481463    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.481541    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:47.481556    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:47.481638    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:35:47.481653    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:35:47.582613    4292 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0806 00:35:47.583428    4292 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0806 00:35:47.583596    4292 ssh_runner.go:195] Run: systemctl --version
	I0806 00:35:47.588843    4292 command_runner.go:130] > systemd 252 (252)
	I0806 00:35:47.588866    4292 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0806 00:35:47.588920    4292 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0806 00:35:47.593612    4292 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0806 00:35:47.593639    4292 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 00:35:47.593687    4292 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 00:35:47.607350    4292 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0806 00:35:47.607480    4292 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 00:35:47.607494    4292 start.go:495] detecting cgroup driver to use...
	I0806 00:35:47.607588    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:35:47.622260    4292 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0806 00:35:47.622586    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0806 00:35:47.631764    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0806 00:35:47.640650    4292 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0806 00:35:47.640704    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0806 00:35:47.649724    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:35:47.658558    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0806 00:35:47.667341    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:35:47.677183    4292 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 00:35:47.686281    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0806 00:35:47.695266    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0806 00:35:47.704014    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0806 00:35:47.712970    4292 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 00:35:47.720743    4292 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0806 00:35:47.720841    4292 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 00:35:47.728846    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:35:47.828742    4292 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0806 00:35:47.848191    4292 start.go:495] detecting cgroup driver to use...
	I0806 00:35:47.848271    4292 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0806 00:35:47.862066    4292 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0806 00:35:47.862604    4292 command_runner.go:130] > [Unit]
	I0806 00:35:47.862619    4292 command_runner.go:130] > Description=Docker Application Container Engine
	I0806 00:35:47.862625    4292 command_runner.go:130] > Documentation=https://docs.docker.com
	I0806 00:35:47.862630    4292 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0806 00:35:47.862634    4292 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0806 00:35:47.862642    4292 command_runner.go:130] > StartLimitBurst=3
	I0806 00:35:47.862646    4292 command_runner.go:130] > StartLimitIntervalSec=60
	I0806 00:35:47.862663    4292 command_runner.go:130] > [Service]
	I0806 00:35:47.862670    4292 command_runner.go:130] > Type=notify
	I0806 00:35:47.862674    4292 command_runner.go:130] > Restart=on-failure
	I0806 00:35:47.862696    4292 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0806 00:35:47.862704    4292 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0806 00:35:47.862710    4292 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0806 00:35:47.862716    4292 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0806 00:35:47.862724    4292 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0806 00:35:47.862731    4292 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0806 00:35:47.862742    4292 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0806 00:35:47.862756    4292 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0806 00:35:47.862768    4292 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0806 00:35:47.862789    4292 command_runner.go:130] > ExecStart=
	I0806 00:35:47.862803    4292 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0806 00:35:47.862808    4292 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0806 00:35:47.862814    4292 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0806 00:35:47.862820    4292 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0806 00:35:47.862826    4292 command_runner.go:130] > LimitNOFILE=infinity
	I0806 00:35:47.862831    4292 command_runner.go:130] > LimitNPROC=infinity
	I0806 00:35:47.862835    4292 command_runner.go:130] > LimitCORE=infinity
	I0806 00:35:47.862840    4292 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0806 00:35:47.862847    4292 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0806 00:35:47.862852    4292 command_runner.go:130] > TasksMax=infinity
	I0806 00:35:47.862857    4292 command_runner.go:130] > TimeoutStartSec=0
	I0806 00:35:47.862864    4292 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0806 00:35:47.862869    4292 command_runner.go:130] > Delegate=yes
	I0806 00:35:47.862875    4292 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0806 00:35:47.862880    4292 command_runner.go:130] > KillMode=process
	I0806 00:35:47.862885    4292 command_runner.go:130] > [Install]
	I0806 00:35:47.862897    4292 command_runner.go:130] > WantedBy=multi-user.target
	I0806 00:35:47.862957    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:35:47.874503    4292 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 00:35:47.888401    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:35:47.899678    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:35:47.910858    4292 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0806 00:35:47.935194    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:35:47.946319    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:35:47.961240    4292 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0806 00:35:47.961509    4292 ssh_runner.go:195] Run: which cri-dockerd
	I0806 00:35:47.964405    4292 command_runner.go:130] > /usr/bin/cri-dockerd
	I0806 00:35:47.964539    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0806 00:35:47.972571    4292 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0806 00:35:47.986114    4292 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0806 00:35:48.089808    4292 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0806 00:35:48.189821    4292 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0806 00:35:48.189902    4292 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0806 00:35:48.205371    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:35:48.305180    4292 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:35:50.610688    4292 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.305442855s)
	I0806 00:35:50.610744    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0806 00:35:50.621917    4292 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0806 00:37:45.085447    4292 ssh_runner.go:235] Completed: sudo systemctl stop cri-docker.socket: (1m54.461245771s)
	I0806 00:37:45.085519    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0806 00:37:45.097196    4292 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0806 00:37:45.197114    4292 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0806 00:37:45.292406    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:37:45.391129    4292 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0806 00:37:45.405046    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0806 00:37:45.416102    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:37:45.533604    4292 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0806 00:37:45.589610    4292 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0806 00:37:45.589706    4292 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0806 00:37:45.594037    4292 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0806 00:37:45.594049    4292 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0806 00:37:45.594054    4292 command_runner.go:130] > Device: 0,22	Inode: 805         Links: 1
	I0806 00:37:45.594060    4292 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0806 00:37:45.594064    4292 command_runner.go:130] > Access: 2024-08-06 07:37:45.625216614 +0000
	I0806 00:37:45.594069    4292 command_runner.go:130] > Modify: 2024-08-06 07:37:45.625216614 +0000
	I0806 00:37:45.594073    4292 command_runner.go:130] > Change: 2024-08-06 07:37:45.627215775 +0000
	I0806 00:37:45.594076    4292 command_runner.go:130] >  Birth: -
	I0806 00:37:45.594117    4292 start.go:563] Will wait 60s for crictl version
	I0806 00:37:45.594161    4292 ssh_runner.go:195] Run: which crictl
	I0806 00:37:45.596956    4292 command_runner.go:130] > /usr/bin/crictl
	I0806 00:37:45.597171    4292 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 00:37:45.621060    4292 command_runner.go:130] > Version:  0.1.0
	I0806 00:37:45.621116    4292 command_runner.go:130] > RuntimeName:  docker
	I0806 00:37:45.621195    4292 command_runner.go:130] > RuntimeVersion:  27.1.1
	I0806 00:37:45.621265    4292 command_runner.go:130] > RuntimeApiVersion:  v1
	I0806 00:37:45.622461    4292 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0806 00:37:45.622524    4292 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0806 00:37:45.639748    4292 command_runner.go:130] > 27.1.1
	I0806 00:37:45.640898    4292 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0806 00:37:45.659970    4292 command_runner.go:130] > 27.1.1
	I0806 00:37:45.682623    4292 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0806 00:37:45.682654    4292 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:37:45.682940    4292 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0806 00:37:45.686120    4292 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 00:37:45.696475    4292 kubeadm.go:883] updating cluster {Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 00:37:45.696537    4292 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:37:45.696591    4292 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0806 00:37:45.709358    4292 docker.go:685] Got preloaded images: 
	I0806 00:37:45.709371    4292 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0806 00:37:45.709415    4292 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0806 00:37:45.717614    4292 command_runner.go:139] > {"Repositories":{}}
	I0806 00:37:45.717741    4292 ssh_runner.go:195] Run: which lz4
	I0806 00:37:45.720684    4292 command_runner.go:130] > /usr/bin/lz4
	I0806 00:37:45.720774    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0806 00:37:45.720887    4292 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0806 00:37:45.723901    4292 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 00:37:45.723990    4292 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 00:37:45.724007    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
	I0806 00:37:46.617374    4292 docker.go:649] duration metric: took 896.51057ms to copy over tarball
	I0806 00:37:46.617438    4292 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0806 00:37:48.962709    4292 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.345209203s)
	I0806 00:37:48.962723    4292 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 00:37:48.989708    4292 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0806 00:37:48.998314    4292 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.3":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.3":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.3":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.3":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0806 00:37:48.998434    4292 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0806 00:37:49.011940    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:37:49.104996    4292 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:37:51.441428    4292 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.336367372s)
	I0806 00:37:51.441504    4292 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0806 00:37:51.454654    4292 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0806 00:37:51.454669    4292 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0806 00:37:51.454674    4292 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0806 00:37:51.454682    4292 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0806 00:37:51.454686    4292 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0806 00:37:51.454690    4292 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0806 00:37:51.454695    4292 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0806 00:37:51.454700    4292 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:37:51.455392    4292 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0806 00:37:51.455409    4292 cache_images.go:84] Images are preloaded, skipping loading
	I0806 00:37:51.455420    4292 kubeadm.go:934] updating node { 192.169.0.13 8443 v1.30.3 docker true true} ...
	I0806 00:37:51.455506    4292 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-100000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 00:37:51.455578    4292 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0806 00:37:51.493148    4292 command_runner.go:130] > cgroupfs
	I0806 00:37:51.493761    4292 cni.go:84] Creating CNI manager for ""
	I0806 00:37:51.493770    4292 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0806 00:37:51.493779    4292 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 00:37:51.493799    4292 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.13 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-100000 NodeName:multinode-100000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 00:37:51.493886    4292 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-100000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 00:37:51.493946    4292 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 00:37:51.501517    4292 command_runner.go:130] > kubeadm
	I0806 00:37:51.501524    4292 command_runner.go:130] > kubectl
	I0806 00:37:51.501527    4292 command_runner.go:130] > kubelet
	I0806 00:37:51.501670    4292 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 00:37:51.501712    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 00:37:51.509045    4292 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0806 00:37:51.522572    4292 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 00:37:51.535791    4292 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0806 00:37:51.549550    4292 ssh_runner.go:195] Run: grep 192.169.0.13	control-plane.minikube.internal$ /etc/hosts
	I0806 00:37:51.552639    4292 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 00:37:51.562209    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:37:51.657200    4292 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:37:51.669303    4292 certs.go:68] Setting up /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000 for IP: 192.169.0.13
	I0806 00:37:51.669315    4292 certs.go:194] generating shared ca certs ...
	I0806 00:37:51.669325    4292 certs.go:226] acquiring lock for ca certs: {Name:mk58145664d6c2b1eff70ba1600cc91cf1a11355 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.669518    4292 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.key
	I0806 00:37:51.669593    4292 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.key
	I0806 00:37:51.669606    4292 certs.go:256] generating profile certs ...
	I0806 00:37:51.669656    4292 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key
	I0806 00:37:51.669668    4292 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt with IP's: []
	I0806 00:37:51.792624    4292 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt ...
	I0806 00:37:51.792639    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt: {Name:mk8667fc194de8cf8fded4f6b0b716fe105f94fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.792981    4292 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key ...
	I0806 00:37:51.792989    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key: {Name:mk5693609b0c83eb3bce2eae7a5d8211445280d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.793215    4292 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key.de816dec
	I0806 00:37:51.793229    4292 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt.de816dec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.13]
	I0806 00:37:51.926808    4292 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt.de816dec ...
	I0806 00:37:51.926818    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt.de816dec: {Name:mk977e2f365dba4e3b0587a998566fa4d7926493 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.927069    4292 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key.de816dec ...
	I0806 00:37:51.927078    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key.de816dec: {Name:mkdef83341ea7ae5698bd9e2d60c39f8cd2a4e46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.927285    4292 certs.go:381] copying /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt.de816dec -> /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt
	I0806 00:37:51.927484    4292 certs.go:385] copying /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key.de816dec -> /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key
	I0806 00:37:51.927653    4292 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key
	I0806 00:37:51.927669    4292 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt with IP's: []
	I0806 00:37:52.088433    4292 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt ...
	I0806 00:37:52.088444    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt: {Name:mkc673b9a3bc6652ddb14f333f9d124c615a6826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:52.088718    4292 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key ...
	I0806 00:37:52.088726    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key: {Name:mkf7f90929aa11855cc285630f5ad4bb575ccae4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:52.088945    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0806 00:37:52.088974    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0806 00:37:52.088995    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0806 00:37:52.089015    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0806 00:37:52.089034    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0806 00:37:52.089054    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0806 00:37:52.089072    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0806 00:37:52.089091    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0806 00:37:52.089188    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437.pem (1338 bytes)
	W0806 00:37:52.089246    4292 certs.go:480] ignoring /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437_empty.pem, impossibly tiny 0 bytes
	I0806 00:37:52.089257    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 00:37:52.089300    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem (1078 bytes)
	I0806 00:37:52.089366    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem (1123 bytes)
	I0806 00:37:52.089422    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem (1679 bytes)
	I0806 00:37:52.089542    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem (1708 bytes)
	I0806 00:37:52.089590    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.089613    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.089632    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437.pem -> /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.090046    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 00:37:52.111710    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 00:37:52.131907    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 00:37:52.151479    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0806 00:37:52.171693    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0806 00:37:52.191484    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 00:37:52.211176    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 00:37:52.230802    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 00:37:52.250506    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem --> /usr/share/ca-certificates/14372.pem (1708 bytes)
	I0806 00:37:52.270606    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 00:37:52.290275    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437.pem --> /usr/share/ca-certificates/1437.pem (1338 bytes)
	I0806 00:37:52.309237    4292 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 00:37:52.323119    4292 ssh_runner.go:195] Run: openssl version
	I0806 00:37:52.327113    4292 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0806 00:37:52.327315    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14372.pem && ln -fs /usr/share/ca-certificates/14372.pem /etc/ssl/certs/14372.pem"
	I0806 00:37:52.335532    4292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.338816    4292 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  6 07:14 /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.338844    4292 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:14 /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.338901    4292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.343016    4292 command_runner.go:130] > 3ec20f2e
	I0806 00:37:52.343165    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14372.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 00:37:52.351433    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 00:37:52.362210    4292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.368669    4292 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  6 07:05 /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.368937    4292 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:05 /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.368987    4292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.373469    4292 command_runner.go:130] > b5213941
	I0806 00:37:52.373704    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 00:37:52.384235    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1437.pem && ln -fs /usr/share/ca-certificates/1437.pem /etc/ssl/certs/1437.pem"
	I0806 00:37:52.395305    4292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.400212    4292 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  6 07:14 /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.400421    4292 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:14 /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.400474    4292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.406136    4292 command_runner.go:130] > 51391683
	I0806 00:37:52.406235    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1437.pem /etc/ssl/certs/51391683.0"
	I0806 00:37:52.415464    4292 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 00:37:52.418597    4292 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0806 00:37:52.418637    4292 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0806 00:37:52.418680    4292 kubeadm.go:392] StartCluster: {Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:37:52.418767    4292 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0806 00:37:52.431331    4292 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 00:37:52.439651    4292 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0806 00:37:52.439663    4292 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0806 00:37:52.439684    4292 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0806 00:37:52.439814    4292 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 00:37:52.447838    4292 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 00:37:52.455844    4292 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0806 00:37:52.455854    4292 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0806 00:37:52.455860    4292 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0806 00:37:52.455865    4292 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 00:37:52.455878    4292 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 00:37:52.455884    4292 kubeadm.go:157] found existing configuration files:
	
	I0806 00:37:52.455917    4292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 00:37:52.463564    4292 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 00:37:52.463581    4292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 00:37:52.463638    4292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 00:37:52.471500    4292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 00:37:52.479060    4292 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 00:37:52.479083    4292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 00:37:52.479115    4292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 00:37:52.487038    4292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 00:37:52.494658    4292 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 00:37:52.494678    4292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 00:37:52.494715    4292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 00:37:52.502699    4292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 00:37:52.510396    4292 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 00:37:52.510413    4292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 00:37:52.510448    4292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 00:37:52.518459    4292 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 00:37:52.582551    4292 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0806 00:37:52.582567    4292 command_runner.go:130] > [init] Using Kubernetes version: v1.30.3
	I0806 00:37:52.582622    4292 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 00:37:52.582630    4292 command_runner.go:130] > [preflight] Running pre-flight checks
	I0806 00:37:52.670948    4292 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 00:37:52.670966    4292 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 00:37:52.671056    4292 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 00:37:52.671068    4292 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 00:37:52.671166    4292 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 00:37:52.671175    4292 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 00:37:52.840152    4292 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 00:37:52.840173    4292 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 00:37:52.860448    4292 out.go:204]   - Generating certificates and keys ...
	I0806 00:37:52.860515    4292 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0806 00:37:52.860522    4292 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 00:37:52.860574    4292 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0806 00:37:52.860578    4292 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 00:37:53.262704    4292 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0806 00:37:53.262716    4292 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0806 00:37:53.357977    4292 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0806 00:37:53.357990    4292 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0806 00:37:53.460380    4292 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0806 00:37:53.460383    4292 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0806 00:37:53.557795    4292 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0806 00:37:53.557804    4292 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0806 00:37:53.672961    4292 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0806 00:37:53.672972    4292 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0806 00:37:53.673143    4292 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-100000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0806 00:37:53.673153    4292 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-100000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0806 00:37:53.823821    4292 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0806 00:37:53.823828    4292 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0806 00:37:53.823935    4292 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-100000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0806 00:37:53.823943    4292 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-100000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0806 00:37:53.907043    4292 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0806 00:37:53.907053    4292 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0806 00:37:54.170203    4292 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0806 00:37:54.170215    4292 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0806 00:37:54.232963    4292 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0806 00:37:54.232976    4292 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0806 00:37:54.233108    4292 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 00:37:54.233115    4292 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 00:37:54.560300    4292 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 00:37:54.560310    4292 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 00:37:54.689503    4292 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0806 00:37:54.689520    4292 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0806 00:37:54.772704    4292 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 00:37:54.772714    4292 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 00:37:54.901757    4292 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 00:37:54.901770    4292 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 00:37:55.057967    4292 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 00:37:55.057987    4292 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 00:37:55.058372    4292 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 00:37:55.058381    4292 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 00:37:55.060093    4292 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 00:37:55.060100    4292 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 00:37:55.081494    4292 out.go:204]   - Booting up control plane ...
	I0806 00:37:55.081559    4292 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 00:37:55.081566    4292 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 00:37:55.081622    4292 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 00:37:55.081627    4292 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 00:37:55.081688    4292 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 00:37:55.081706    4292 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 00:37:55.081835    4292 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 00:37:55.081836    4292 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 00:37:55.081921    4292 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 00:37:55.081928    4292 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 00:37:55.081962    4292 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 00:37:55.081972    4292 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0806 00:37:55.190382    4292 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0806 00:37:55.190382    4292 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0806 00:37:55.190467    4292 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0806 00:37:55.190474    4292 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0806 00:37:55.692270    4292 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.007026ms
	I0806 00:37:55.692288    4292 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 502.007026ms
	I0806 00:37:55.692374    4292 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0806 00:37:55.692383    4292 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0806 00:37:59.693684    4292 kubeadm.go:310] [api-check] The API server is healthy after 4.003026548s
	I0806 00:37:59.693693    4292 command_runner.go:130] > [api-check] The API server is healthy after 4.003026548s
	I0806 00:37:59.705633    4292 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0806 00:37:59.705646    4292 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0806 00:37:59.720099    4292 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0806 00:37:59.720109    4292 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0806 00:37:59.738249    4292 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0806 00:37:59.738275    4292 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0806 00:37:59.738423    4292 kubeadm.go:310] [mark-control-plane] Marking the node multinode-100000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0806 00:37:59.738434    4292 command_runner.go:130] > [mark-control-plane] Marking the node multinode-100000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0806 00:37:59.745383    4292 kubeadm.go:310] [bootstrap-token] Using token: vbomjh.qsf72loo4zgv06fc
	I0806 00:37:59.745397    4292 command_runner.go:130] > [bootstrap-token] Using token: vbomjh.qsf72loo4zgv06fc
	I0806 00:37:59.783358    4292 out.go:204]   - Configuring RBAC rules ...
	I0806 00:37:59.783539    4292 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0806 00:37:59.783560    4292 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0806 00:37:59.785907    4292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0806 00:37:59.785948    4292 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0806 00:37:59.826999    4292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0806 00:37:59.827006    4292 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0806 00:37:59.829623    4292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0806 00:37:59.829627    4292 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0806 00:37:59.832217    4292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0806 00:37:59.832231    4292 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0806 00:37:59.834614    4292 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0806 00:37:59.834628    4292 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0806 00:38:00.099434    4292 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0806 00:38:00.099444    4292 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0806 00:38:00.510267    4292 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0806 00:38:00.510286    4292 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0806 00:38:01.098516    4292 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0806 00:38:01.098535    4292 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0806 00:38:01.099426    4292 kubeadm.go:310] 
	I0806 00:38:01.099476    4292 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0806 00:38:01.099482    4292 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0806 00:38:01.099485    4292 kubeadm.go:310] 
	I0806 00:38:01.099571    4292 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0806 00:38:01.099579    4292 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0806 00:38:01.099583    4292 kubeadm.go:310] 
	I0806 00:38:01.099621    4292 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0806 00:38:01.099627    4292 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0806 00:38:01.099685    4292 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0806 00:38:01.099692    4292 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0806 00:38:01.099737    4292 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0806 00:38:01.099742    4292 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0806 00:38:01.099758    4292 kubeadm.go:310] 
	I0806 00:38:01.099805    4292 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0806 00:38:01.099811    4292 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0806 00:38:01.099816    4292 kubeadm.go:310] 
	I0806 00:38:01.099868    4292 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0806 00:38:01.099874    4292 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0806 00:38:01.099878    4292 kubeadm.go:310] 
	I0806 00:38:01.099924    4292 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0806 00:38:01.099932    4292 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0806 00:38:01.099998    4292 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0806 00:38:01.100012    4292 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0806 00:38:01.100083    4292 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0806 00:38:01.100088    4292 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0806 00:38:01.100092    4292 kubeadm.go:310] 
	I0806 00:38:01.100168    4292 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0806 00:38:01.100177    4292 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0806 00:38:01.100245    4292 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0806 00:38:01.100249    4292 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0806 00:38:01.100256    4292 kubeadm.go:310] 
	I0806 00:38:01.100330    4292 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token vbomjh.qsf72loo4zgv06fc \
	I0806 00:38:01.100335    4292 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vbomjh.qsf72loo4zgv06fc \
	I0806 00:38:01.100422    4292 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e \
	I0806 00:38:01.100428    4292 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e \
	I0806 00:38:01.100450    4292 command_runner.go:130] > 	--control-plane 
	I0806 00:38:01.100454    4292 kubeadm.go:310] 	--control-plane 
	I0806 00:38:01.100465    4292 kubeadm.go:310] 
	I0806 00:38:01.100533    4292 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0806 00:38:01.100538    4292 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0806 00:38:01.100545    4292 kubeadm.go:310] 
	I0806 00:38:01.100605    4292 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token vbomjh.qsf72loo4zgv06fc \
	I0806 00:38:01.100610    4292 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vbomjh.qsf72loo4zgv06fc \
	I0806 00:38:01.100694    4292 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e 
	I0806 00:38:01.100703    4292 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e 
	I0806 00:38:01.101330    4292 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 00:38:01.101334    4292 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 00:38:01.101354    4292 cni.go:84] Creating CNI manager for ""
	I0806 00:38:01.101361    4292 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0806 00:38:01.123627    4292 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0806 00:38:01.196528    4292 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0806 00:38:01.201237    4292 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0806 00:38:01.201250    4292 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0806 00:38:01.201255    4292 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0806 00:38:01.201260    4292 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0806 00:38:01.201265    4292 command_runner.go:130] > Access: 2024-08-06 07:35:44.089192446 +0000
	I0806 00:38:01.201275    4292 command_runner.go:130] > Modify: 2024-07-29 16:10:03.000000000 +0000
	I0806 00:38:01.201282    4292 command_runner.go:130] > Change: 2024-08-06 07:35:42.019366338 +0000
	I0806 00:38:01.201285    4292 command_runner.go:130] >  Birth: -
	I0806 00:38:01.201457    4292 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0806 00:38:01.201465    4292 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0806 00:38:01.217771    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0806 00:38:01.451925    4292 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0806 00:38:01.451939    4292 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0806 00:38:01.451946    4292 command_runner.go:130] > serviceaccount/kindnet created
	I0806 00:38:01.451949    4292 command_runner.go:130] > daemonset.apps/kindnet created
	I0806 00:38:01.451970    4292 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 00:38:01.452056    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:01.452057    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-100000 minikube.k8s.io/updated_at=2024_08_06T00_38_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8 minikube.k8s.io/name=multinode-100000 minikube.k8s.io/primary=true
	I0806 00:38:01.610233    4292 command_runner.go:130] > node/multinode-100000 labeled
	I0806 00:38:01.611382    4292 command_runner.go:130] > -16
	I0806 00:38:01.611408    4292 ops.go:34] apiserver oom_adj: -16
	I0806 00:38:01.611436    4292 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0806 00:38:01.611535    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:01.673352    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:02.112700    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:02.170574    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:02.612824    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:02.681015    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:03.112860    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:03.173114    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:03.612060    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:03.674241    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:04.112239    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:04.174075    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:04.613016    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:04.675523    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:05.112239    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:05.171613    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:05.611863    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:05.672963    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:06.112009    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:06.167728    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:06.613273    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:06.670554    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:07.113057    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:07.167700    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:07.613035    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:07.675035    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:08.113568    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:08.177386    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:08.611850    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:08.669063    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:09.113472    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:09.173560    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:09.613780    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:09.676070    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:10.112109    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:10.172674    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:10.613930    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:10.669788    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:11.112032    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:11.178288    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:11.612564    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:11.681621    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:12.112219    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:12.169314    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:12.612581    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:12.670247    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:13.113181    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:13.172574    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:13.613362    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:13.672811    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:14.112553    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:14.177904    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:14.612414    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:14.708737    4292 command_runner.go:130] > NAME      SECRETS   AGE
	I0806 00:38:14.708751    4292 command_runner.go:130] > default   0         0s
	I0806 00:38:14.710041    4292 kubeadm.go:1113] duration metric: took 13.257790627s to wait for elevateKubeSystemPrivileges
	I0806 00:38:14.710058    4292 kubeadm.go:394] duration metric: took 22.29094538s to StartCluster
	I0806 00:38:14.710072    4292 settings.go:142] acquiring lock: {Name:mk7aec99dc6d69d6a2c18b35ff8bde3cddf78620 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:38:14.710182    4292 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:38:14.710733    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/kubeconfig: {Name:mka547673b59bc4eb06e1f2c8130de31708dba29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:38:14.710987    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0806 00:38:14.710992    4292 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 00:38:14.711032    4292 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 00:38:14.711084    4292 addons.go:69] Setting storage-provisioner=true in profile "multinode-100000"
	I0806 00:38:14.711092    4292 addons.go:69] Setting default-storageclass=true in profile "multinode-100000"
	I0806 00:38:14.711119    4292 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-100000"
	I0806 00:38:14.711121    4292 addons.go:234] Setting addon storage-provisioner=true in "multinode-100000"
	I0806 00:38:14.711168    4292 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:38:14.711168    4292 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:38:14.711516    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.711537    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.711593    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.711618    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.720676    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52433
	I0806 00:38:14.721047    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52435
	I0806 00:38:14.721245    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.721337    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.721602    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.721612    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.721697    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.721714    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.721841    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.721914    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.721953    4292 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:38:14.722073    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:14.722146    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:38:14.722387    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.722420    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.724119    4292 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:38:14.724644    4292 kapi.go:59] client config for multinode-100000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key", CAFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x126711a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 00:38:14.725326    4292 cert_rotation.go:137] Starting client certificate rotation controller
	I0806 00:38:14.725514    4292 addons.go:234] Setting addon default-storageclass=true in "multinode-100000"
	I0806 00:38:14.725534    4292 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:38:14.725758    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.725781    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.731505    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52437
	I0806 00:38:14.731883    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.732214    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.732225    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.732427    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.732542    4292 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:38:14.732646    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:14.732716    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:38:14.733688    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:38:14.734469    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52439
	I0806 00:38:14.749366    4292 out.go:177] * Verifying Kubernetes components...
	I0806 00:38:14.750086    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.771676    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.771692    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.771908    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.772346    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.772371    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.781133    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52441
	I0806 00:38:14.781487    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.781841    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.781857    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.782071    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.782186    4292 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:38:14.782264    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:14.782343    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:38:14.783274    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:38:14.783391    4292 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 00:38:14.783400    4292 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 00:38:14.783408    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:38:14.783487    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:38:14.783566    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:38:14.783647    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:38:14.783724    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:38:14.807507    4292 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:38:14.814402    4292 command_runner.go:130] > apiVersion: v1
	I0806 00:38:14.814414    4292 command_runner.go:130] > data:
	I0806 00:38:14.814417    4292 command_runner.go:130] >   Corefile: |
	I0806 00:38:14.814421    4292 command_runner.go:130] >     .:53 {
	I0806 00:38:14.814427    4292 command_runner.go:130] >         errors
	I0806 00:38:14.814434    4292 command_runner.go:130] >         health {
	I0806 00:38:14.814462    4292 command_runner.go:130] >            lameduck 5s
	I0806 00:38:14.814467    4292 command_runner.go:130] >         }
	I0806 00:38:14.814470    4292 command_runner.go:130] >         ready
	I0806 00:38:14.814475    4292 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0806 00:38:14.814479    4292 command_runner.go:130] >            pods insecure
	I0806 00:38:14.814483    4292 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0806 00:38:14.814491    4292 command_runner.go:130] >            ttl 30
	I0806 00:38:14.814494    4292 command_runner.go:130] >         }
	I0806 00:38:14.814498    4292 command_runner.go:130] >         prometheus :9153
	I0806 00:38:14.814502    4292 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0806 00:38:14.814511    4292 command_runner.go:130] >            max_concurrent 1000
	I0806 00:38:14.814515    4292 command_runner.go:130] >         }
	I0806 00:38:14.814519    4292 command_runner.go:130] >         cache 30
	I0806 00:38:14.814522    4292 command_runner.go:130] >         loop
	I0806 00:38:14.814527    4292 command_runner.go:130] >         reload
	I0806 00:38:14.814530    4292 command_runner.go:130] >         loadbalance
	I0806 00:38:14.814541    4292 command_runner.go:130] >     }
	I0806 00:38:14.814545    4292 command_runner.go:130] > kind: ConfigMap
	I0806 00:38:14.814548    4292 command_runner.go:130] > metadata:
	I0806 00:38:14.814555    4292 command_runner.go:130] >   creationTimestamp: "2024-08-06T07:38:00Z"
	I0806 00:38:14.814559    4292 command_runner.go:130] >   name: coredns
	I0806 00:38:14.814563    4292 command_runner.go:130] >   namespace: kube-system
	I0806 00:38:14.814566    4292 command_runner.go:130] >   resourceVersion: "257"
	I0806 00:38:14.814570    4292 command_runner.go:130] >   uid: d8fd854e-ee58-4cd2-8723-2418b89b5dc3
	I0806 00:38:14.814679    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0806 00:38:14.866135    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:38:14.866436    4292 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 00:38:14.866454    4292 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 00:38:14.866500    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:38:14.866990    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:38:14.867164    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:38:14.867290    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:38:14.867406    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:38:14.872742    4292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 00:38:15.241341    4292 command_runner.go:130] > configmap/coredns replaced
	I0806 00:38:15.242685    4292 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I0806 00:38:15.242758    4292 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:38:15.242961    4292 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:38:15.243148    4292 kapi.go:59] client config for multinode-100000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key", CAFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x126711a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 00:38:15.243392    4292 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0806 00:38:15.243400    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.243407    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.243411    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.256678    4292 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0806 00:38:15.256695    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.256702    4292 round_trippers.go:580]     Audit-Id: c7c6b1c0-d638-405d-9826-1613f9442124
	I0806 00:38:15.256715    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.256719    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.256721    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.256724    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.256731    4292 round_trippers.go:580]     Content-Length: 291
	I0806 00:38:15.256734    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.256762    4292 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a7f2b260-b404-47f8-94a7-9444b4d2e65d","resourceVersion":"385","creationTimestamp":"2024-08-06T07:38:00Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0806 00:38:15.257109    4292 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a7f2b260-b404-47f8-94a7-9444b4d2e65d","resourceVersion":"385","creationTimestamp":"2024-08-06T07:38:00Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0806 00:38:15.257149    4292 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0806 00:38:15.257157    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.257163    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.257166    4292 round_trippers.go:473]     Content-Type: application/json
	I0806 00:38:15.257169    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.263818    4292 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0806 00:38:15.263831    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.263837    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.263840    4292 round_trippers.go:580]     Content-Length: 291
	I0806 00:38:15.263843    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.263846    4292 round_trippers.go:580]     Audit-Id: fc5baf31-13f0-4c94-a234-c9583698bc4a
	I0806 00:38:15.263849    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.263853    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.263856    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.263869    4292 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a7f2b260-b404-47f8-94a7-9444b4d2e65d","resourceVersion":"387","creationTimestamp":"2024-08-06T07:38:00Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0806 00:38:15.288440    4292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 00:38:15.316986    4292 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0806 00:38:15.318339    4292 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:38:15.318523    4292 kapi.go:59] client config for multinode-100000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key", CAFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x126711a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 00:38:15.318703    4292 node_ready.go:35] waiting up to 6m0s for node "multinode-100000" to be "Ready" ...
	I0806 00:38:15.318752    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:15.318757    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.318762    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.318766    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.318890    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.318897    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.319084    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.319089    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.319096    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.319104    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.319113    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.319239    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.319249    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.319298    4292 round_trippers.go:463] GET https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses
	I0806 00:38:15.319296    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.319304    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.319313    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.319316    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.328466    4292 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0806 00:38:15.328478    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.328484    4292 round_trippers.go:580]     Content-Length: 1273
	I0806 00:38:15.328487    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.328490    4292 round_trippers.go:580]     Audit-Id: 55117bdb-b1b1-4b1d-a091-1eb3d07a9569
	I0806 00:38:15.328493    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.328496    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.328498    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.328501    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.328521    4292 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"396"},"items":[{"metadata":{"name":"standard","uid":"db2316a9-24ea-47df-bf39-03322fc9a8eb","resourceVersion":"396","creationTimestamp":"2024-08-06T07:38:15Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-06T07:38:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0806 00:38:15.328567    4292 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0806 00:38:15.328581    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.328586    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.328590    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.328593    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.328596    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.328599    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.328602    4292 round_trippers.go:580]     Audit-Id: 7ce70ed0-47c9-432d-8e5b-ac52e38e59a7
	I0806 00:38:15.328766    4292 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"db2316a9-24ea-47df-bf39-03322fc9a8eb","resourceVersion":"396","creationTimestamp":"2024-08-06T07:38:15Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-06T07:38:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0806 00:38:15.328802    4292 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0806 00:38:15.328808    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.328813    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.328818    4292 round_trippers.go:473]     Content-Type: application/json
	I0806 00:38:15.328820    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.330337    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:15.340216    4292 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0806 00:38:15.340231    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.340236    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.340243    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.340247    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.340251    4292 round_trippers.go:580]     Content-Length: 1220
	I0806 00:38:15.340254    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.340257    4292 round_trippers.go:580]     Audit-Id: 6dc8b90a-612f-4331-8c4e-911fcb5e8b97
	I0806 00:38:15.340261    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.340479    4292 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"db2316a9-24ea-47df-bf39-03322fc9a8eb","resourceVersion":"396","creationTimestamp":"2024-08-06T07:38:15Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-06T07:38:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0806 00:38:15.340564    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.340574    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.340728    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.340739    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.340746    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.606405    4292 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0806 00:38:15.610350    4292 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0806 00:38:15.615396    4292 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0806 00:38:15.619891    4292 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0806 00:38:15.627349    4292 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0806 00:38:15.635206    4292 command_runner.go:130] > pod/storage-provisioner created
	I0806 00:38:15.636675    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.636686    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.636830    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.636833    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.636843    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.636852    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.636857    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.636972    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.636980    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.636995    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.660876    4292 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0806 00:38:15.681735    4292 addons.go:510] duration metric: took 970.696783ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0806 00:38:15.744023    4292 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0806 00:38:15.744043    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.744049    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.744053    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.745471    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:15.745481    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.745486    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.745489    4292 round_trippers.go:580]     Audit-Id: 2e02dd3c-4368-4363-aef8-c54cb00d4e41
	I0806 00:38:15.745492    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.745495    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.745497    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.745500    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.745503    4292 round_trippers.go:580]     Content-Length: 291
	I0806 00:38:15.745519    4292 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a7f2b260-b404-47f8-94a7-9444b4d2e65d","resourceVersion":"399","creationTimestamp":"2024-08-06T07:38:00Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0806 00:38:15.745572    4292 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-100000" context rescaled to 1 replicas
	I0806 00:38:15.820125    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:15.820137    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.820143    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.820145    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.821478    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:15.821488    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.821495    4292 round_trippers.go:580]     Audit-Id: 2538e82b-a5b8-4cce-b67d-49b0a0cc6ccb
	I0806 00:38:15.821499    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.821504    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.821509    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.821513    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.821517    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.821699    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:16.318995    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:16.319022    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:16.319044    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:16.319050    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:16.321451    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:16.321466    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:16.321473    4292 round_trippers.go:580]     Audit-Id: 6d358883-b606-4bf9-b02f-6cb3dcc86ebb
	I0806 00:38:16.321478    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:16.321482    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:16.321507    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:16.321515    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:16.321519    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:16 GMT
	I0806 00:38:16.321636    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:16.819864    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:16.819880    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:16.819887    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:16.819892    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:16.822003    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:16.822013    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:16.822019    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:16.822032    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:16 GMT
	I0806 00:38:16.822039    4292 round_trippers.go:580]     Audit-Id: 688c294c-2ec1-4257-9ae2-31048566e1a5
	I0806 00:38:16.822042    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:16.822045    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:16.822048    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:16.822127    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:17.319875    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:17.319887    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:17.319893    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:17.319898    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:17.324202    4292 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 00:38:17.324219    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:17.324228    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:17.324233    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:17.324237    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:17.324247    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:17.324251    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:17 GMT
	I0806 00:38:17.324253    4292 round_trippers.go:580]     Audit-Id: 3cbcad32-1d66-4480-8eea-e0ba3baeb718
	I0806 00:38:17.324408    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:17.324668    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:17.818929    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:17.818941    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:17.818948    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:17.818952    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:17.820372    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:17.820383    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:17.820390    4292 round_trippers.go:580]     Audit-Id: 1b64d2ad-91d1-49c6-8964-cd044f7ab24f
	I0806 00:38:17.820395    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:17.820400    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:17.820404    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:17.820407    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:17.820409    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:17 GMT
	I0806 00:38:17.820562    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:18.318915    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:18.318928    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:18.318934    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:18.318937    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:18.320383    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:18.320392    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:18.320396    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:18.320400    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:18 GMT
	I0806 00:38:18.320403    4292 round_trippers.go:580]     Audit-Id: b404a6ee-15b9-4e15-b8f8-4cd9324a513d
	I0806 00:38:18.320405    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:18.320408    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:18.320411    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:18.320536    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:18.819634    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:18.819647    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:18.819654    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:18.819657    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:18.821628    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:18.821635    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:18.821639    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:18 GMT
	I0806 00:38:18.821643    4292 round_trippers.go:580]     Audit-Id: 12545d9e-2520-4675-8957-dd291bc1d252
	I0806 00:38:18.821646    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:18.821649    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:18.821651    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:18.821654    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:18.821749    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:19.319242    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:19.319258    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:19.319264    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:19.319267    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:19.320611    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:19.320621    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:19.320627    4292 round_trippers.go:580]     Audit-Id: a9b124b2-ff49-4d7d-961a-c4a1b6b3e4ab
	I0806 00:38:19.320630    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:19.320632    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:19.320635    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:19.320639    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:19.320642    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:19 GMT
	I0806 00:38:19.320781    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:19.820342    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:19.820371    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:19.820428    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:19.820437    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:19.823221    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:19.823242    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:19.823252    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:19.823258    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:19.823266    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:19.823272    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:19 GMT
	I0806 00:38:19.823291    4292 round_trippers.go:580]     Audit-Id: 9330a785-b406-42d7-a74c-e80b34311e1a
	I0806 00:38:19.823302    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:19.823409    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:19.823671    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:20.319027    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:20.319043    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:20.319051    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:20.319056    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:20.320812    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:20.320821    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:20.320827    4292 round_trippers.go:580]     Audit-Id: 1d9840bb-ba8b-45f8-852f-8ef7f645c8bd
	I0806 00:38:20.320830    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:20.320832    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:20.320835    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:20.320838    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:20.320841    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:20 GMT
	I0806 00:38:20.321034    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:20.819543    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:20.819566    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:20.819578    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:20.819585    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:20.822277    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:20.822293    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:20.822300    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:20.822303    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:20.822307    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:20 GMT
	I0806 00:38:20.822310    4292 round_trippers.go:580]     Audit-Id: 6a96712c-fdd2-4036-95c0-27109366b2b5
	I0806 00:38:20.822313    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:20.822332    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:20.822436    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:21.319938    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:21.320061    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:21.320076    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:21.320084    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:21.322332    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:21.322343    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:21.322350    4292 round_trippers.go:580]     Audit-Id: b6796df6-8c9c-475a-b9c2-e73edb1c0720
	I0806 00:38:21.322355    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:21.322359    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:21.322362    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:21.322366    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:21.322370    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:21 GMT
	I0806 00:38:21.322503    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:21.819349    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:21.819372    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:21.819383    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:21.819388    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:21.821890    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:21.821905    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:21.821912    4292 round_trippers.go:580]     Audit-Id: 89b2a861-f5a0-43e4-9d3f-01f7230eecc8
	I0806 00:38:21.821916    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:21.821920    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:21.821923    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:21.821927    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:21.821931    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:21 GMT
	I0806 00:38:21.822004    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:22.320544    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:22.320565    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:22.320576    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:22.320581    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:22.322858    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:22.322872    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:22.322879    4292 round_trippers.go:580]     Audit-Id: 70ae59be-bf9a-4c7a-9fb8-93ea504768fb
	I0806 00:38:22.322885    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:22.322888    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:22.322891    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:22.322895    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:22.322897    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:22 GMT
	I0806 00:38:22.323158    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:22.323412    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:22.819095    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:22.819114    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:22.819126    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:22.819132    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:22.821284    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:22.821297    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:22.821307    4292 round_trippers.go:580]     Audit-Id: 1c5d80ab-21c3-4733-bd98-f4c681e0fe0e
	I0806 00:38:22.821313    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:22.821318    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:22.821321    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:22.821324    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:22.821334    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:22 GMT
	I0806 00:38:22.821552    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:23.319478    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:23.319500    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:23.319518    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:23.319524    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:23.322104    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:23.322124    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:23.322132    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:23.322137    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:23.322143    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:23.322146    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:23.322156    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:23 GMT
	I0806 00:38:23.322161    4292 round_trippers.go:580]     Audit-Id: 5276d3f7-64a0-4983-b60c-4943cbdfd74f
	I0806 00:38:23.322305    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:23.819102    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:23.819121    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:23.819130    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:23.819135    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:23.821174    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:23.821208    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:23.821216    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:23.821222    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:23.821227    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:23.821230    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:23.821241    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:23 GMT
	I0806 00:38:23.821254    4292 round_trippers.go:580]     Audit-Id: 9a86a309-2e1e-4b43-9975-baf4a0c93f44
	I0806 00:38:23.821483    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:24.320265    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:24.320287    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:24.320299    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:24.320305    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:24.323064    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:24.323097    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:24.323123    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:24.323140    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:24.323149    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:24.323178    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:24.323185    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:24 GMT
	I0806 00:38:24.323196    4292 round_trippers.go:580]     Audit-Id: b0ef4ff1-b4d6-4fd5-870c-46b9f544b517
	I0806 00:38:24.323426    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:24.323675    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:24.819060    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:24.819080    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:24.819097    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:24.819136    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:24.821377    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:24.821390    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:24.821397    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:24 GMT
	I0806 00:38:24.821402    4292 round_trippers.go:580]     Audit-Id: b050183e-0245-4d40-9972-e2dd2be24181
	I0806 00:38:24.821405    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:24.821409    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:24.821413    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:24.821418    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:24.821619    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:25.319086    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:25.319102    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:25.319110    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:25.319114    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:25.321127    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:25.321149    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:25.321154    4292 round_trippers.go:580]     Audit-Id: b27c2996-2cfb-4ec2-83b6-49df62cf6805
	I0806 00:38:25.321177    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:25.321180    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:25.321184    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:25.321186    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:25.321194    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:25 GMT
	I0806 00:38:25.321259    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:25.820656    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:25.820678    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:25.820689    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:25.820695    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:25.823182    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:25.823194    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:25.823205    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:25.823210    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:25.823213    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:25.823216    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:25.823219    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:25 GMT
	I0806 00:38:25.823222    4292 round_trippers.go:580]     Audit-Id: e11f3fd5-b1c3-44c0-931c-e7172ae35765
	I0806 00:38:25.823311    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:26.320693    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:26.320710    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:26.320717    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:26.320721    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:26.322330    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:26.322339    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:26.322344    4292 round_trippers.go:580]     Audit-Id: 0c372b78-f3b7-43f2-a7aa-6ec405f17ce3
	I0806 00:38:26.322347    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:26.322350    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:26.322353    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:26.322363    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:26.322366    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:26 GMT
	I0806 00:38:26.322578    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:26.820921    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:26.820948    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:26.820966    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:26.820972    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:26.823698    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:26.823713    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:26.823723    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:26.823730    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:26.823739    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:26 GMT
	I0806 00:38:26.823757    4292 round_trippers.go:580]     Audit-Id: e8e852a8-07b7-455b-8f5b-ff9801610b22
	I0806 00:38:26.823766    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:26.823770    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:26.824211    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:26.824465    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:27.321232    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:27.321253    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:27.321265    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:27.321270    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:27.324530    4292 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:38:27.324543    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:27.324550    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:27.324554    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:27 GMT
	I0806 00:38:27.324566    4292 round_trippers.go:580]     Audit-Id: 4a0b2d15-d15f-46de-8b4a-13a9d4121efd
	I0806 00:38:27.324572    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:27.324578    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:27.324583    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:27.324732    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:27.820148    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:27.820170    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:27.820181    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:27.820186    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:27.822835    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:27.822859    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:27.823023    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:27.823030    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:27.823033    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:27.823038    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:27.823046    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:27 GMT
	I0806 00:38:27.823049    4292 round_trippers.go:580]     Audit-Id: 77dd4240-18e0-49c7-8881-ae5df446f885
	I0806 00:38:27.823127    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:28.319391    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:28.319412    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:28.319423    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:28.319431    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:28.321889    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:28.321906    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:28.321916    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:28.321923    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:28.321927    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:28.321930    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:28 GMT
	I0806 00:38:28.321933    4292 round_trippers.go:580]     Audit-Id: d4ff4fc8-d53b-4307-82a0-9a61164b0b18
	I0806 00:38:28.321937    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:28.322088    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:28.819334    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:28.819362    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:28.819374    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:28.819385    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:28.821814    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:28.821826    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:28.821833    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:28.821838    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:28.821843    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:28.821847    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:28.821851    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:28 GMT
	I0806 00:38:28.821855    4292 round_trippers.go:580]     Audit-Id: 9a79b284-c2c3-4adb-9d74-73805465144b
	I0806 00:38:28.821988    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:29.320103    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:29.320120    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:29.320128    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:29.320134    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:29.321966    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:29.321980    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:29.321987    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:29.322000    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:29.322005    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:29.322008    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:29 GMT
	I0806 00:38:29.322020    4292 round_trippers.go:580]     Audit-Id: 749bcf9b-24c9-4fac-99d8-ad9e961b1897
	I0806 00:38:29.322024    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:29.322094    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:29.322341    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:29.819722    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:29.819743    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:29.819752    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:29.819760    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:29.822636    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:29.822668    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:29.822700    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:29.822711    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:29.822721    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:29.822735    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:29 GMT
	I0806 00:38:29.822748    4292 round_trippers.go:580]     Audit-Id: 5408f9b5-fba3-4495-a0b7-9791cf82019c
	I0806 00:38:29.822773    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:29.822903    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:30.320349    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:30.320370    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.320380    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.320385    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.322518    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:30.322531    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.322538    4292 round_trippers.go:580]     Audit-Id: 1df1df85-a25c-4470-876a-7b00620c8f9b
	I0806 00:38:30.322543    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.322546    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.322550    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.322553    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.322558    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.322794    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:30.820065    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:30.820087    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.820099    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.820111    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.822652    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:30.822673    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.822683    4292 round_trippers.go:580]     Audit-Id: 0926ae78-d98d-44a5-8489-5522ccd95503
	I0806 00:38:30.822689    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.822695    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.822700    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.822706    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.822713    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.823032    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:30.823315    4292 node_ready.go:49] node "multinode-100000" has status "Ready":"True"
	I0806 00:38:30.823329    4292 node_ready.go:38] duration metric: took 15.504306549s for node "multinode-100000" to be "Ready" ...
	I0806 00:38:30.823341    4292 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 00:38:30.823387    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:30.823395    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.823403    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.823407    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.825747    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:30.825756    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.825761    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.825764    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.825768    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.825770    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.825773    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.825775    4292 round_trippers.go:580]     Audit-Id: f1883856-a563-4d68-a4ed-7bface4b980a
	I0806 00:38:30.827206    4292 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"431","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56289 chars]
	I0806 00:38:30.829456    4292 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:30.829498    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:38:30.829503    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.829508    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.829512    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.830675    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:30.830684    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.830691    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.830696    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.830704    4292 round_trippers.go:580]     Audit-Id: f42eab96-6adf-4fcb-9345-e180ca00b73d
	I0806 00:38:30.830715    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.830718    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.830720    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.830856    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"431","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0806 00:38:30.831092    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:30.831099    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.831105    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.831107    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.832184    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:30.832191    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.832197    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.832203    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.832207    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.832212    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.832218    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.832226    4292 round_trippers.go:580]     Audit-Id: d34ccfc2-089c-4010-b991-cc425a2b2446
	I0806 00:38:30.832371    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.329830    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:38:31.329844    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.329850    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.329854    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.331738    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:31.331767    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.331789    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.331808    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.331813    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.331817    4292 round_trippers.go:580]     Audit-Id: 32294b1b-fd5c-43f7-9851-1c5e5d04c3d9
	I0806 00:38:31.331820    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.331823    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.331921    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"431","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0806 00:38:31.332207    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.332215    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.332221    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.332225    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.333311    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:31.333324    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.333331    4292 round_trippers.go:580]     Audit-Id: a8b9458e-7f48-4e61-9daf-b2c4a52b1285
	I0806 00:38:31.333336    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.333342    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.333347    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.333351    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.333369    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.333493    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.830019    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:38:31.830040    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.830057    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.830063    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.832040    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:31.832055    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.832062    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.832068    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.832072    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.832076    4292 round_trippers.go:580]     Audit-Id: eae85e40-d774-4e35-8513-1a20542ce5f5
	I0806 00:38:31.832079    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.832082    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.832316    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"446","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6576 chars]
	I0806 00:38:31.832691    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.832701    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.832710    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.832715    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.833679    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.833688    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.833694    4292 round_trippers.go:580]     Audit-Id: ecd49a1b-eb24-4191-89bb-5cb071fd543a
	I0806 00:38:31.833699    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.833702    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.833711    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.833714    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.833717    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.833906    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.834082    4292 pod_ready.go:92] pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:31.834093    4292 pod_ready.go:81] duration metric: took 1.004604302s for pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.834101    4292 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.834131    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-100000
	I0806 00:38:31.834136    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.834141    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.834145    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.835126    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.835134    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.835139    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.835144    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.835147    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.835152    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.835155    4292 round_trippers.go:580]     Audit-Id: 8f3355e7-ed89-4a5c-9ef4-3f319a0b7eef
	I0806 00:38:31.835157    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.835289    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-100000","namespace":"kube-system","uid":"227ab7d9-399e-4151-bee7-1520182e38fe","resourceVersion":"333","creationTimestamp":"2024-08-06T07:37:59Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"4d956ffcd8bdef6a75a3174d9c9d792c","kubernetes.io/config.mirror":"4d956ffcd8bdef6a75a3174d9c9d792c","kubernetes.io/config.seen":"2024-08-06T07:37:55.730523562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:37:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0806 00:38:31.835498    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.835505    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.835510    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.835514    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.836524    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:31.836533    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.836539    4292 round_trippers.go:580]     Audit-Id: a9fdb4f7-31e3-48e4-b5f3-023b2c5e4bab
	I0806 00:38:31.836547    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.836553    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.836556    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.836562    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.836568    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.836674    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.836837    4292 pod_ready.go:92] pod "etcd-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:31.836847    4292 pod_ready.go:81] duration metric: took 2.741532ms for pod "etcd-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.836854    4292 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.836883    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-100000
	I0806 00:38:31.836888    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.836894    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.836898    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.837821    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.837830    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.837836    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.837840    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.837844    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.837846    4292 round_trippers.go:580]     Audit-Id: 32a7a6c7-72cf-4b7f-8f80-7ebb5aaaf666
	I0806 00:38:31.837850    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.837853    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.838003    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-100000","namespace":"kube-system","uid":"ce1dee9b-5f30-49a9-9066-7faf5f65c4d3","resourceVersion":"331","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"7812fbdfd4f741d8b504bcb30d9268c5","kubernetes.io/config.mirror":"7812fbdfd4f741d8b504bcb30d9268c5","kubernetes.io/config.seen":"2024-08-06T07:38:00.425843150Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7684 chars]
	I0806 00:38:31.838230    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.838237    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.838243    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.838247    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.839014    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.839023    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.839030    4292 round_trippers.go:580]     Audit-Id: 7f28e0f4-8551-4462-aec2-766b8d2482cb
	I0806 00:38:31.839036    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.839040    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.839042    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.839045    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.839048    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.839181    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.839335    4292 pod_ready.go:92] pod "kube-apiserver-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:31.839345    4292 pod_ready.go:81] duration metric: took 2.482949ms for pod "kube-apiserver-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.839352    4292 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.839378    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-100000
	I0806 00:38:31.839383    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.839388    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.839392    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.840298    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.840305    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.840310    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.840313    4292 round_trippers.go:580]     Audit-Id: cf384588-551f-4b8a-b13b-1adda6dff10a
	I0806 00:38:31.840317    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.840320    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.840324    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.840328    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.840495    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-100000","namespace":"kube-system","uid":"cefe88fb-c337-47c3-b4f2-acdadde539f2","resourceVersion":"329","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0ae29164078dfb7d8ac7d5a935c4d875","kubernetes.io/config.mirror":"0ae29164078dfb7d8ac7d5a935c4d875","kubernetes.io/config.seen":"2024-08-06T07:38:00.425770816Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7259 chars]
	I0806 00:38:31.840707    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.840714    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.840719    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.840722    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.841465    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.841471    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.841476    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.841481    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.841487    4292 round_trippers.go:580]     Audit-Id: 9a301694-659b-414d-8736-740501267c17
	I0806 00:38:31.841491    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.841496    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.841500    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.841678    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.841830    4292 pod_ready.go:92] pod "kube-controller-manager-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:31.841836    4292 pod_ready.go:81] duration metric: took 2.479787ms for pod "kube-controller-manager-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.841842    4292 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-crsrr" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.841875    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crsrr
	I0806 00:38:31.841880    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.841885    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.841890    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.842875    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.842883    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.842888    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.842891    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.842895    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.842898    4292 round_trippers.go:580]     Audit-Id: 9e07db72-d867-47d3-adbc-514b547e8978
	I0806 00:38:31.842901    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.842904    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.843113    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-crsrr","generateName":"kube-proxy-","namespace":"kube-system","uid":"f72beca3-9601-4aad-b3ba-33f8de5db052","resourceVersion":"403","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aeb7868a-2175-4480-b58d-3eb9a593c884","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aeb7868a-2175-4480-b58d-3eb9a593c884\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
	I0806 00:38:32.021239    4292 request.go:629] Waited for 177.889914ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:32.021360    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:32.021372    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.021384    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.021390    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.024288    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:32.024309    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.024318    4292 round_trippers.go:580]     Audit-Id: d85fbd21-5256-48bd-b92b-10eb012d9c7a
	I0806 00:38:32.024322    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.024327    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.024331    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.024336    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.024339    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.024617    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:32.024865    4292 pod_ready.go:92] pod "kube-proxy-crsrr" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:32.024877    4292 pod_ready.go:81] duration metric: took 183.025974ms for pod "kube-proxy-crsrr" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:32.024887    4292 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:32.222202    4292 request.go:629] Waited for 197.196804ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-100000
	I0806 00:38:32.222252    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-100000
	I0806 00:38:32.222260    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.222284    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.222291    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.225758    4292 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:38:32.225776    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.225783    4292 round_trippers.go:580]     Audit-Id: 9c5c96d8-55ee-43bd-b8fe-af3b79432f55
	I0806 00:38:32.225788    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.225791    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.225797    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.225800    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.225803    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.225862    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-100000","namespace":"kube-system","uid":"773d7bde-86f3-4e9d-b4aa-67ca3b345180","resourceVersion":"332","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4d38f57d568be838072abd789adb44b9","kubernetes.io/config.mirror":"4d38f57d568be838072abd789adb44b9","kubernetes.io/config.seen":"2024-08-06T07:38:00.425836810Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0806 00:38:32.420759    4292 request.go:629] Waited for 194.652014ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:32.420927    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:32.420938    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.420949    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.420955    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.423442    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:32.423460    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.423471    4292 round_trippers.go:580]     Audit-Id: 04a6ba1a-a35c-4d8b-a087-80f9206646b4
	I0806 00:38:32.423478    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.423483    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.423488    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.423493    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.423499    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.423791    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:32.424052    4292 pod_ready.go:92] pod "kube-scheduler-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:32.424064    4292 pod_ready.go:81] duration metric: took 399.162309ms for pod "kube-scheduler-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:32.424073    4292 pod_ready.go:38] duration metric: took 1.600692444s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 00:38:32.424096    4292 api_server.go:52] waiting for apiserver process to appear ...
	I0806 00:38:32.424160    4292 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:38:32.436813    4292 command_runner.go:130] > 1953
	I0806 00:38:32.436840    4292 api_server.go:72] duration metric: took 17.725484476s to wait for apiserver process to appear ...
	I0806 00:38:32.436849    4292 api_server.go:88] waiting for apiserver healthz status ...
	I0806 00:38:32.436863    4292 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0806 00:38:32.440364    4292 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0806 00:38:32.440399    4292 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I0806 00:38:32.440404    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.440410    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.440421    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.440928    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:32.440937    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.440942    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.440946    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.440950    4292 round_trippers.go:580]     Content-Length: 263
	I0806 00:38:32.440953    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.440959    4292 round_trippers.go:580]     Audit-Id: c1a3bf62-d4bb-49fe-bb9c-6619b1793ab6
	I0806 00:38:32.440962    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.440965    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.440976    4292 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0806 00:38:32.441018    4292 api_server.go:141] control plane version: v1.30.3
	I0806 00:38:32.441028    4292 api_server.go:131] duration metric: took 4.174407ms to wait for apiserver health ...
	I0806 00:38:32.441033    4292 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 00:38:32.620918    4292 request.go:629] Waited for 179.84972ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:32.620960    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:32.620982    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.620988    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.620992    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.623183    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:32.623194    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.623199    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.623202    4292 round_trippers.go:580]     Audit-Id: 7febd61d-780d-47b6-884a-fdaf22170934
	I0806 00:38:32.623206    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.623211    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.623217    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.623221    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.623596    4292 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"446","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0806 00:38:32.624861    4292 system_pods.go:59] 8 kube-system pods found
	I0806 00:38:32.624876    4292 system_pods.go:61] "coredns-7db6d8ff4d-snf8h" [80bd44de-6f91-4e47-8832-a66b3c64808d] Running
	I0806 00:38:32.624880    4292 system_pods.go:61] "etcd-multinode-100000" [227ab7d9-399e-4151-bee7-1520182e38fe] Running
	I0806 00:38:32.624883    4292 system_pods.go:61] "kindnet-g2xk7" [84207ead-3403-4759-9bf2-ae0aa742699e] Running
	I0806 00:38:32.624886    4292 system_pods.go:61] "kube-apiserver-multinode-100000" [ce1dee9b-5f30-49a9-9066-7faf5f65c4d3] Running
	I0806 00:38:32.624890    4292 system_pods.go:61] "kube-controller-manager-multinode-100000" [cefe88fb-c337-47c3-b4f2-acdadde539f2] Running
	I0806 00:38:32.624895    4292 system_pods.go:61] "kube-proxy-crsrr" [f72beca3-9601-4aad-b3ba-33f8de5db052] Running
	I0806 00:38:32.624897    4292 system_pods.go:61] "kube-scheduler-multinode-100000" [773d7bde-86f3-4e9d-b4aa-67ca3b345180] Running
	I0806 00:38:32.624900    4292 system_pods.go:61] "storage-provisioner" [38b20fa5-6002-4e12-860f-1aa0047581b1] Running
	I0806 00:38:32.624904    4292 system_pods.go:74] duration metric: took 183.863815ms to wait for pod list to return data ...
	I0806 00:38:32.624911    4292 default_sa.go:34] waiting for default service account to be created ...
	I0806 00:38:32.821065    4292 request.go:629] Waited for 196.088199ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0806 00:38:32.821123    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0806 00:38:32.821132    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.821146    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.821153    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.824169    4292 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:38:32.824185    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.824192    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.824198    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.824203    4292 round_trippers.go:580]     Content-Length: 261
	I0806 00:38:32.824207    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.824210    4292 round_trippers.go:580]     Audit-Id: da9e49d4-6671-4b25-a056-32b71af0fb45
	I0806 00:38:32.824214    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.824217    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.824230    4292 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b920a0f4-26ad-4389-bfd3-1a9764da9619","resourceVersion":"336","creationTimestamp":"2024-08-06T07:38:14Z"}}]}
	I0806 00:38:32.824397    4292 default_sa.go:45] found service account: "default"
	I0806 00:38:32.824409    4292 default_sa.go:55] duration metric: took 199.488573ms for default service account to be created ...
	I0806 00:38:32.824419    4292 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 00:38:33.021550    4292 request.go:629] Waited for 197.072106ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:33.021720    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:33.021731    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:33.021741    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:33.021779    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:33.025126    4292 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:38:33.025143    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:33.025150    4292 round_trippers.go:580]     Audit-Id: e38b20d4-b38f-40c8-9e18-7f94f8f63289
	I0806 00:38:33.025155    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:33.025161    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:33.025166    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:33.025173    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:33.025177    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:33 GMT
	I0806 00:38:33.025737    4292 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"446","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0806 00:38:33.027034    4292 system_pods.go:86] 8 kube-system pods found
	I0806 00:38:33.027043    4292 system_pods.go:89] "coredns-7db6d8ff4d-snf8h" [80bd44de-6f91-4e47-8832-a66b3c64808d] Running
	I0806 00:38:33.027047    4292 system_pods.go:89] "etcd-multinode-100000" [227ab7d9-399e-4151-bee7-1520182e38fe] Running
	I0806 00:38:33.027050    4292 system_pods.go:89] "kindnet-g2xk7" [84207ead-3403-4759-9bf2-ae0aa742699e] Running
	I0806 00:38:33.027054    4292 system_pods.go:89] "kube-apiserver-multinode-100000" [ce1dee9b-5f30-49a9-9066-7faf5f65c4d3] Running
	I0806 00:38:33.027057    4292 system_pods.go:89] "kube-controller-manager-multinode-100000" [cefe88fb-c337-47c3-b4f2-acdadde539f2] Running
	I0806 00:38:33.027060    4292 system_pods.go:89] "kube-proxy-crsrr" [f72beca3-9601-4aad-b3ba-33f8de5db052] Running
	I0806 00:38:33.027066    4292 system_pods.go:89] "kube-scheduler-multinode-100000" [773d7bde-86f3-4e9d-b4aa-67ca3b345180] Running
	I0806 00:38:33.027069    4292 system_pods.go:89] "storage-provisioner" [38b20fa5-6002-4e12-860f-1aa0047581b1] Running
	I0806 00:38:33.027074    4292 system_pods.go:126] duration metric: took 202.645822ms to wait for k8s-apps to be running ...
	I0806 00:38:33.027081    4292 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 00:38:33.027147    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:38:33.038782    4292 system_svc.go:56] duration metric: took 11.697186ms WaitForService to wait for kubelet
	I0806 00:38:33.038797    4292 kubeadm.go:582] duration metric: took 18.327429775s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 00:38:33.038809    4292 node_conditions.go:102] verifying NodePressure condition ...
	I0806 00:38:33.220593    4292 request.go:629] Waited for 181.736174ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes
	I0806 00:38:33.220673    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0806 00:38:33.220683    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:33.220694    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:33.220703    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:33.223131    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:33.223147    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:33.223155    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:33 GMT
	I0806 00:38:33.223160    4292 round_trippers.go:580]     Audit-Id: c7a766de-973c-44db-9b8e-eb7ce291fdca
	I0806 00:38:33.223172    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:33.223177    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:33.223182    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:33.223222    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:33.223296    4292 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5011 chars]
	I0806 00:38:33.223576    4292 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 00:38:33.223592    4292 node_conditions.go:123] node cpu capacity is 2
	I0806 00:38:33.223604    4292 node_conditions.go:105] duration metric: took 184.787012ms to run NodePressure ...
	I0806 00:38:33.223614    4292 start.go:241] waiting for startup goroutines ...
	I0806 00:38:33.223627    4292 start.go:246] waiting for cluster config update ...
	I0806 00:38:33.223640    4292 start.go:255] writing updated cluster config ...
	I0806 00:38:33.244314    4292 out.go:177] 
	I0806 00:38:33.265217    4292 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:38:33.265273    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:38:33.287112    4292 out.go:177] * Starting "multinode-100000-m02" worker node in "multinode-100000" cluster
	I0806 00:38:33.345022    4292 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:38:33.345057    4292 cache.go:56] Caching tarball of preloaded images
	I0806 00:38:33.345244    4292 preload.go:172] Found /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0806 00:38:33.345262    4292 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 00:38:33.345351    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:38:33.346110    4292 start.go:360] acquireMachinesLock for multinode-100000-m02: {Name:mk23fe223591838ba69a1052c4474834b6e8897d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:38:33.346217    4292 start.go:364] duration metric: took 84.997µs to acquireMachinesLock for "multinode-100000-m02"
	I0806 00:38:33.346243    4292 start.go:93] Provisioning new machine with config: &{Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0806 00:38:33.346328    4292 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0806 00:38:33.367079    4292 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 00:38:33.367208    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:33.367236    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:33.376938    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52447
	I0806 00:38:33.377289    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:33.377644    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:33.377655    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:33.377842    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:33.377956    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:38:33.378049    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:33.378167    4292 start.go:159] libmachine.API.Create for "multinode-100000" (driver="hyperkit")
	I0806 00:38:33.378183    4292 client.go:168] LocalClient.Create starting
	I0806 00:38:33.378211    4292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem
	I0806 00:38:33.378259    4292 main.go:141] libmachine: Decoding PEM data...
	I0806 00:38:33.378273    4292 main.go:141] libmachine: Parsing certificate...
	I0806 00:38:33.378324    4292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem
	I0806 00:38:33.378363    4292 main.go:141] libmachine: Decoding PEM data...
	I0806 00:38:33.378372    4292 main.go:141] libmachine: Parsing certificate...
	I0806 00:38:33.378386    4292 main.go:141] libmachine: Running pre-create checks...
	I0806 00:38:33.378391    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .PreCreateCheck
	I0806 00:38:33.378464    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:33.378493    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetConfigRaw
	I0806 00:38:33.388269    4292 main.go:141] libmachine: Creating machine...
	I0806 00:38:33.388286    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .Create
	I0806 00:38:33.388457    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:33.388692    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | I0806 00:38:33.388444    4424 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 00:38:33.388794    4292 main.go:141] libmachine: (multinode-100000-m02) Downloading /Users/jenkins/minikube-integration/19370-944/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-944/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 00:38:33.588443    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | I0806 00:38:33.588344    4424 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa...
	I0806 00:38:33.635329    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | I0806 00:38:33.635211    4424 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/multinode-100000-m02.rawdisk...
	I0806 00:38:33.635352    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Writing magic tar header
	I0806 00:38:33.635368    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Writing SSH key tar header
	I0806 00:38:33.635773    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | I0806 00:38:33.635735    4424 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02 ...
	I0806 00:38:34.046661    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:34.046692    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/hyperkit.pid
	I0806 00:38:34.046795    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Using UUID 11e38ce6-805a-4a8b-9cb1-968ee3a613d4
	I0806 00:38:34.072180    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Generated MAC ee:b:b7:3a:75:5c
	I0806 00:38:34.072206    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000
	I0806 00:38:34.072252    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"11e38ce6-805a-4a8b-9cb1-968ee3a613d4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00011a450)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 00:38:34.072281    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"11e38ce6-805a-4a8b-9cb1-968ee3a613d4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00011a450)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 00:38:34.072340    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "11e38ce6-805a-4a8b-9cb1-968ee3a613d4", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/multinode-100000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"}
	I0806 00:38:34.072382    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 11e38ce6-805a-4a8b-9cb1-968ee3a613d4 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/multinode-100000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"
	I0806 00:38:34.072394    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0806 00:38:34.075231    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: Pid is 4427
	I0806 00:38:34.076417    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 0
	I0806 00:38:34.076438    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:34.076502    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:34.077372    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:34.077449    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:34.077468    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:34.077497    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:34.077509    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:34.077532    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:34.077550    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:34.077560    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:34.077570    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:34.077578    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:34.077587    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:34.077606    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:34.077631    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:34.077647    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:34.082964    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0806 00:38:34.092078    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0806 00:38:34.092798    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:38:34.092819    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:38:34.092831    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:38:34.092850    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:38:34.480770    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0806 00:38:34.480795    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0806 00:38:34.595499    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:38:34.595518    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:38:34.595530    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:38:34.595538    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:38:34.596350    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0806 00:38:34.596362    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0806 00:38:36.077787    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 1
	I0806 00:38:36.077803    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:36.077889    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:36.078719    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:36.078768    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:36.078779    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:36.078796    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:36.078805    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:36.078813    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:36.078820    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:36.078827    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:36.078837    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:36.078843    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:36.078849    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:36.078864    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:36.078881    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:36.078889    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:38.079369    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 2
	I0806 00:38:38.079385    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:38.079432    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:38.080212    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:38.080262    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:38.080273    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:38.080290    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:38.080296    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:38.080303    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:38.080310    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:38.080318    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:38.080325    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:38.080339    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:38.080355    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:38.080367    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:38.080376    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:38.080384    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:40.081876    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 3
	I0806 00:38:40.081892    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:40.081903    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:40.082774    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:40.082801    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:40.082812    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:40.082846    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:40.082873    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:40.082900    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:40.082918    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:40.082931    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:40.082940    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:40.082950    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:40.082966    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:40.082978    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:40.082987    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:40.082995    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:40.179725    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:40 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0806 00:38:40.179781    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:40 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0806 00:38:40.179795    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:40 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0806 00:38:40.203197    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:40 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0806 00:38:42.084360    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 4
	I0806 00:38:42.084374    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:42.084499    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:42.085281    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:42.085335    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:42.085343    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:42.085351    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:42.085358    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:42.085365    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:42.085371    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:42.085378    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:42.085386    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:42.085402    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:42.085414    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:42.085433    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:42.085441    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:42.085450    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:44.085602    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 5
	I0806 00:38:44.085628    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:44.085697    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:44.086496    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:44.086550    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 13 entries in /var/db/dhcpd_leases!
	I0806 00:38:44.086561    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b32483}
	I0806 00:38:44.086569    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found match: ee:b:b7:3a:75:5c
	I0806 00:38:44.086577    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | IP: 192.169.0.14
	I0806 00:38:44.086637    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetConfigRaw
	I0806 00:38:44.087855    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:44.087962    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:44.088059    4292 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0806 00:38:44.088068    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetState
	I0806 00:38:44.088141    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:44.088197    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:44.089006    4292 main.go:141] libmachine: Detecting operating system of created instance...
	I0806 00:38:44.089014    4292 main.go:141] libmachine: Waiting for SSH to be available...
	I0806 00:38:44.089023    4292 main.go:141] libmachine: Getting to WaitForSSH function...
	I0806 00:38:44.089029    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:44.089111    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:44.089190    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:44.089273    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:44.089354    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:44.089473    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:44.089664    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:44.089672    4292 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0806 00:38:45.153792    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:38:45.153806    4292 main.go:141] libmachine: Detecting the provisioner...
	I0806 00:38:45.153811    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.153942    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.154043    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.154169    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.154275    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.154425    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.154571    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.154581    4292 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0806 00:38:45.217564    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0806 00:38:45.217637    4292 main.go:141] libmachine: found compatible host: buildroot
	I0806 00:38:45.217648    4292 main.go:141] libmachine: Provisioning with buildroot...
	I0806 00:38:45.217668    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:38:45.217807    4292 buildroot.go:166] provisioning hostname "multinode-100000-m02"
	I0806 00:38:45.217817    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:38:45.217917    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.218023    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.218107    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.218194    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.218285    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.218407    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.218557    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.218566    4292 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-100000-m02 && echo "multinode-100000-m02" | sudo tee /etc/hostname
	I0806 00:38:45.293086    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-100000-m02
	
	I0806 00:38:45.293102    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.293254    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.293346    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.293437    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.293522    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.293658    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.293798    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.293811    4292 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-100000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-100000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-100000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 00:38:45.363408    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:38:45.363423    4292 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19370-944/.minikube CaCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19370-944/.minikube}
	I0806 00:38:45.363450    4292 buildroot.go:174] setting up certificates
	I0806 00:38:45.363457    4292 provision.go:84] configureAuth start
	I0806 00:38:45.363465    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:38:45.363605    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:38:45.363709    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.363796    4292 provision.go:143] copyHostCerts
	I0806 00:38:45.363827    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:38:45.363873    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem, removing ...
	I0806 00:38:45.363879    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:38:45.364378    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem (1078 bytes)
	I0806 00:38:45.364592    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:38:45.364623    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem, removing ...
	I0806 00:38:45.364628    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:38:45.364717    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem (1123 bytes)
	I0806 00:38:45.364875    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:38:45.364915    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem, removing ...
	I0806 00:38:45.364920    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:38:45.365034    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem (1679 bytes)
	I0806 00:38:45.365183    4292 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem org=jenkins.multinode-100000-m02 san=[127.0.0.1 192.169.0.14 localhost minikube multinode-100000-m02]
	I0806 00:38:45.437744    4292 provision.go:177] copyRemoteCerts
	I0806 00:38:45.437791    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 00:38:45.437806    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.437948    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.438040    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.438126    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.438207    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:38:45.477030    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0806 00:38:45.477105    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0806 00:38:45.496899    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0806 00:38:45.496965    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 00:38:45.516273    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0806 00:38:45.516341    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 00:38:45.536083    4292 provision.go:87] duration metric: took 172.615051ms to configureAuth
	I0806 00:38:45.536096    4292 buildroot.go:189] setting minikube options for container-runtime
	I0806 00:38:45.536221    4292 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:38:45.536234    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:45.536380    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.536470    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.536563    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.536650    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.536733    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.536861    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.536987    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.536994    4292 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0806 00:38:45.599518    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0806 00:38:45.599531    4292 buildroot.go:70] root file system type: tmpfs
	I0806 00:38:45.599626    4292 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0806 00:38:45.599637    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.599779    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.599891    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.599996    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.600086    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.600232    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.600374    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.600420    4292 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.13"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0806 00:38:45.674942    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.13
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0806 00:38:45.674960    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.675092    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.675165    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.675259    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.675344    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.675469    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.675602    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.675614    4292 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0806 00:38:47.211811    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0806 00:38:47.211826    4292 main.go:141] libmachine: Checking connection to Docker...
	I0806 00:38:47.211840    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetURL
	I0806 00:38:47.211985    4292 main.go:141] libmachine: Docker is up and running!
	I0806 00:38:47.211993    4292 main.go:141] libmachine: Reticulating splines...
	I0806 00:38:47.212004    4292 client.go:171] duration metric: took 13.833536596s to LocalClient.Create
	I0806 00:38:47.212016    4292 start.go:167] duration metric: took 13.833577856s to libmachine.API.Create "multinode-100000"
	I0806 00:38:47.212022    4292 start.go:293] postStartSetup for "multinode-100000-m02" (driver="hyperkit")
	I0806 00:38:47.212029    4292 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 00:38:47.212038    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.212165    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 00:38:47.212186    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:47.212274    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:47.212359    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.212450    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:47.212536    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:38:47.253675    4292 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 00:38:47.257359    4292 command_runner.go:130] > NAME=Buildroot
	I0806 00:38:47.257369    4292 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0806 00:38:47.257374    4292 command_runner.go:130] > ID=buildroot
	I0806 00:38:47.257380    4292 command_runner.go:130] > VERSION_ID=2023.02.9
	I0806 00:38:47.257386    4292 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0806 00:38:47.257598    4292 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 00:38:47.257609    4292 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/addons for local assets ...
	I0806 00:38:47.257715    4292 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/files for local assets ...
	I0806 00:38:47.257899    4292 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> 14372.pem in /etc/ssl/certs
	I0806 00:38:47.257909    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> /etc/ssl/certs/14372.pem
	I0806 00:38:47.258116    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 00:38:47.265892    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem --> /etc/ssl/certs/14372.pem (1708 bytes)
	I0806 00:38:47.297110    4292 start.go:296] duration metric: took 85.078237ms for postStartSetup
	I0806 00:38:47.297144    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetConfigRaw
	I0806 00:38:47.297792    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:38:47.297951    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:38:47.298302    4292 start.go:128] duration metric: took 13.951673071s to createHost
	I0806 00:38:47.298316    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:47.298413    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:47.298502    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.298600    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.298678    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:47.298783    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:47.298907    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:47.298914    4292 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 00:38:47.362043    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722929927.409318196
	
	I0806 00:38:47.362057    4292 fix.go:216] guest clock: 1722929927.409318196
	I0806 00:38:47.362062    4292 fix.go:229] Guest: 2024-08-06 00:38:47.409318196 -0700 PDT Remote: 2024-08-06 00:38:47.29831 -0700 PDT m=+194.654596821 (delta=111.008196ms)
	I0806 00:38:47.362071    4292 fix.go:200] guest clock delta is within tolerance: 111.008196ms
	I0806 00:38:47.362075    4292 start.go:83] releasing machines lock for "multinode-100000-m02", held for 14.015572789s
	I0806 00:38:47.362092    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.362220    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:38:47.382612    4292 out.go:177] * Found network options:
	I0806 00:38:47.403509    4292 out.go:177]   - NO_PROXY=192.169.0.13
	W0806 00:38:47.425687    4292 proxy.go:119] fail to check proxy env: Error ip not in block
	I0806 00:38:47.425738    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.426659    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.426958    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.427090    4292 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 00:38:47.427141    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	W0806 00:38:47.427187    4292 proxy.go:119] fail to check proxy env: Error ip not in block
	I0806 00:38:47.427313    4292 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0806 00:38:47.427341    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:47.427407    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:47.427565    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.427581    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:47.427794    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:47.427828    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.428004    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:38:47.428059    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:47.428184    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:38:47.463967    4292 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0806 00:38:47.464076    4292 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 00:38:47.464135    4292 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 00:38:47.515738    4292 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0806 00:38:47.516046    4292 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0806 00:38:47.516081    4292 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 00:38:47.516093    4292 start.go:495] detecting cgroup driver to use...
	I0806 00:38:47.516195    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:38:47.531806    4292 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0806 00:38:47.532062    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0806 00:38:47.541039    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0806 00:38:47.549828    4292 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0806 00:38:47.549876    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0806 00:38:47.558599    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:38:47.567484    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0806 00:38:47.576295    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:38:47.585146    4292 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 00:38:47.594084    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0806 00:38:47.603103    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0806 00:38:47.612032    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0806 00:38:47.620981    4292 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 00:38:47.628905    4292 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0806 00:38:47.629040    4292 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 00:38:47.637032    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:38:47.727863    4292 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0806 00:38:47.745831    4292 start.go:495] detecting cgroup driver to use...
	I0806 00:38:47.745898    4292 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0806 00:38:47.763079    4292 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0806 00:38:47.764017    4292 command_runner.go:130] > [Unit]
	I0806 00:38:47.764028    4292 command_runner.go:130] > Description=Docker Application Container Engine
	I0806 00:38:47.764033    4292 command_runner.go:130] > Documentation=https://docs.docker.com
	I0806 00:38:47.764038    4292 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0806 00:38:47.764043    4292 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0806 00:38:47.764047    4292 command_runner.go:130] > StartLimitBurst=3
	I0806 00:38:47.764051    4292 command_runner.go:130] > StartLimitIntervalSec=60
	I0806 00:38:47.764054    4292 command_runner.go:130] > [Service]
	I0806 00:38:47.764058    4292 command_runner.go:130] > Type=notify
	I0806 00:38:47.764062    4292 command_runner.go:130] > Restart=on-failure
	I0806 00:38:47.764066    4292 command_runner.go:130] > Environment=NO_PROXY=192.169.0.13
	I0806 00:38:47.764072    4292 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0806 00:38:47.764084    4292 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0806 00:38:47.764091    4292 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0806 00:38:47.764099    4292 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0806 00:38:47.764105    4292 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0806 00:38:47.764111    4292 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0806 00:38:47.764118    4292 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0806 00:38:47.764125    4292 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0806 00:38:47.764132    4292 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0806 00:38:47.764135    4292 command_runner.go:130] > ExecStart=
	I0806 00:38:47.764154    4292 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0806 00:38:47.764161    4292 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0806 00:38:47.764170    4292 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0806 00:38:47.764178    4292 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0806 00:38:47.764185    4292 command_runner.go:130] > LimitNOFILE=infinity
	I0806 00:38:47.764190    4292 command_runner.go:130] > LimitNPROC=infinity
	I0806 00:38:47.764193    4292 command_runner.go:130] > LimitCORE=infinity
	I0806 00:38:47.764198    4292 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0806 00:38:47.764203    4292 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0806 00:38:47.764207    4292 command_runner.go:130] > TasksMax=infinity
	I0806 00:38:47.764211    4292 command_runner.go:130] > TimeoutStartSec=0
	I0806 00:38:47.764221    4292 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0806 00:38:47.764225    4292 command_runner.go:130] > Delegate=yes
	I0806 00:38:47.764229    4292 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0806 00:38:47.764248    4292 command_runner.go:130] > KillMode=process
	I0806 00:38:47.764252    4292 command_runner.go:130] > [Install]
	I0806 00:38:47.764256    4292 command_runner.go:130] > WantedBy=multi-user.target
	I0806 00:38:47.765971    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:38:47.779284    4292 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 00:38:47.799617    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:38:47.811733    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:38:47.822897    4292 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0806 00:38:47.842546    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:38:47.852923    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:38:47.867417    4292 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0806 00:38:47.867762    4292 ssh_runner.go:195] Run: which cri-dockerd
	I0806 00:38:47.870482    4292 command_runner.go:130] > /usr/bin/cri-dockerd
	I0806 00:38:47.870656    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0806 00:38:47.877934    4292 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0806 00:38:47.891287    4292 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0806 00:38:47.996736    4292 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0806 00:38:48.093921    4292 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0806 00:38:48.093947    4292 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0806 00:38:48.107654    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:38:48.205348    4292 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:39:49.225463    4292 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0806 00:39:49.225479    4292 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0806 00:39:49.225576    4292 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.019011706s)
	I0806 00:39:49.225635    4292 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0806 00:39:49.235342    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0806 00:39:49.235356    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.029974914Z" level=info msg="Starting up"
	I0806 00:39:49.235366    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.030437769Z" level=info msg="containerd not running, starting managed containerd"
	I0806 00:39:49.235376    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.030979400Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=517
	I0806 00:39:49.235386    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.047036729Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0806 00:39:49.235397    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064397167Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0806 00:39:49.235412    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064452673Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0806 00:39:49.235422    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064502313Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0806 00:39:49.235431    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064513542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235443    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064584182Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235454    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064595120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235473    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064727739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235483    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064762709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235494    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064774342Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235504    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064782161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235516    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064887916Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235526    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.065042581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235542    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.066836201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235552    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.066879570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235575    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067028916Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235585    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067064324Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0806 00:39:49.235594    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067179567Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0806 00:39:49.235602    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067249087Z" level=info msg="metadata content store policy set" policy=shared
	I0806 00:39:49.235611    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069585528Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0806 00:39:49.235620    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069659860Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0806 00:39:49.235632    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069674694Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0806 00:39:49.235641    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069684754Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0806 00:39:49.235650    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069696901Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0806 00:39:49.235663    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069776277Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0806 00:39:49.235672    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070041788Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0806 00:39:49.235681    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070145442Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0806 00:39:49.235690    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070181841Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0806 00:39:49.235699    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070193788Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0806 00:39:49.235708    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070209053Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235730    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070220561Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235739    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070229053Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235748    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070237872Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235765    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070247145Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235774    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070258808Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235870    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070271932Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235884    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070282113Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235895    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070295317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235905    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070333749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235913    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070369063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235922    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070379382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235931    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070387399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235940    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070395816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235948    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070403669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235957    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070414456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235966    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070430669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235975    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070442977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235983    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070451302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235992    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070459477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236001    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070468439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236009    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070478113Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0806 00:39:49.236018    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070497412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236026    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070508384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236035    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070518009Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0806 00:39:49.236044    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070547883Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0806 00:39:49.236055    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070582373Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0806 00:39:49.236065    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070592270Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0806 00:39:49.236165    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070600495Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0806 00:39:49.236179    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070607217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236192    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070615273Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0806 00:39:49.236200    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070622931Z" level=info msg="NRI interface is disabled by configuration."
	I0806 00:39:49.236208    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070750538Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0806 00:39:49.236217    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070809085Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0806 00:39:49.236224    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070954500Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0806 00:39:49.236232    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070997549Z" level=info msg="containerd successfully booted in 0.024512s"
	I0806 00:39:49.236240    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.050791909Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0806 00:39:49.236247    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.057142082Z" level=info msg="Loading containers: start."
	I0806 00:39:49.236266    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.142415375Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0806 00:39:49.236275    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.222958623Z" level=info msg="Loading containers: done."
	I0806 00:39:49.236287    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.231011060Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	I0806 00:39:49.236296    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.231179810Z" level=info msg="Daemon has completed initialization"
	I0806 00:39:49.236304    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.256766502Z" level=info msg="API listen on [::]:2376"
	I0806 00:39:49.236312    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 systemd[1]: Started Docker Application Container Engine.
	I0806 00:39:49.236320    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.256921161Z" level=info msg="API listen on /var/run/docker.sock"
	I0806 00:39:49.236327    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.264611587Z" level=info msg="Processing signal 'terminated'"
	I0806 00:39:49.236336    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265650519Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0806 00:39:49.236346    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265852818Z" level=info msg="Daemon shutdown complete"
	I0806 00:39:49.236355    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265902413Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0806 00:39:49.236364    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265913447Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0806 00:39:49.236371    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0806 00:39:49.236376    4292 command_runner.go:130] > Aug 06 07:38:49 multinode-100000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0806 00:39:49.236404    4292 command_runner.go:130] > Aug 06 07:38:49 multinode-100000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0806 00:39:49.236411    4292 command_runner.go:130] > Aug 06 07:38:49 multinode-100000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0806 00:39:49.236417    4292 command_runner.go:130] > Aug 06 07:38:49 multinode-100000-m02 dockerd[911]: time="2024-08-06T07:38:49.299585024Z" level=info msg="Starting up"
	I0806 00:39:49.236427    4292 command_runner.go:130] > Aug 06 07:39:49 multinode-100000-m02 dockerd[911]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0806 00:39:49.236434    4292 command_runner.go:130] > Aug 06 07:39:49 multinode-100000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0806 00:39:49.236440    4292 command_runner.go:130] > Aug 06 07:39:49 multinode-100000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0806 00:39:49.236446    4292 command_runner.go:130] > Aug 06 07:39:49 multinode-100000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0806 00:39:49.260697    4292 out.go:177] 
	W0806 00:39:49.281618    4292 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 06 07:38:46 multinode-100000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.029974914Z" level=info msg="Starting up"
	Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.030437769Z" level=info msg="containerd not running, starting managed containerd"
	Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.030979400Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=517
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.047036729Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064397167Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064452673Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064502313Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064513542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064584182Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064595120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064727739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064762709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064774342Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064782161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064887916Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.065042581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.066836201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.066879570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067028916Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067064324Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067179567Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067249087Z" level=info msg="metadata content store policy set" policy=shared
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069585528Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069659860Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069674694Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069684754Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069696901Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069776277Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070041788Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070145442Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070181841Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070193788Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070209053Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070220561Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070229053Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070237872Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070247145Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070258808Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070271932Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070282113Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070295317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070333749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070369063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070379382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070387399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070395816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070403669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070414456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070430669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070442977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070451302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070459477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070468439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070478113Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070497412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070508384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070518009Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070547883Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070582373Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070592270Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070600495Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070607217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070615273Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070622931Z" level=info msg="NRI interface is disabled by configuration."
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070750538Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070809085Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070954500Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070997549Z" level=info msg="containerd successfully booted in 0.024512s"
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.050791909Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.057142082Z" level=info msg="Loading containers: start."
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.142415375Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.222958623Z" level=info msg="Loading containers: done."
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.231011060Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.231179810Z" level=info msg="Daemon has completed initialization"
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.256766502Z" level=info msg="API listen on [::]:2376"
	Aug 06 07:38:47 multinode-100000-m02 systemd[1]: Started Docker Application Container Engine.
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.256921161Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.264611587Z" level=info msg="Processing signal 'terminated'"
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265650519Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265852818Z" level=info msg="Daemon shutdown complete"
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265902413Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265913447Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 06 07:38:48 multinode-100000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Aug 06 07:38:49 multinode-100000-m02 systemd[1]: docker.service: Deactivated successfully.
	Aug 06 07:38:49 multinode-100000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:38:49 multinode-100000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:38:49 multinode-100000-m02 dockerd[911]: time="2024-08-06T07:38:49.299585024Z" level=info msg="Starting up"
	Aug 06 07:39:49 multinode-100000-m02 dockerd[911]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 06 07:39:49 multinode-100000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 06 07:39:49 multinode-100000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 06 07:39:49 multinode-100000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0806 00:39:49.281745    4292 out.go:239] * 
	W0806 00:39:49.282923    4292 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 00:39:49.343567    4292 out.go:177] 
	
	
	==> Docker <==
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.120405532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.122053171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.122124908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.122262728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.123348677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 cri-dockerd[1120]: time="2024-08-06T07:38:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5fae897eca5b0180afaec9950c31ab8fe6410f45ea64033ab2505d448d0abc87/resolv.conf as [nameserver 192.169.0.1]"
	Aug 06 07:38:31 multinode-100000 cri-dockerd[1120]: time="2024-08-06T07:38:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ea5bc31c54836987e38373933c6df0383027c87ef8cff7c9e1da5b24b5cabe9c/resolv.conf as [nameserver 192.169.0.1]"
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.260884497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.261094181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.261344995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.270291928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.310563342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.310630330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.310652817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.310750128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:39:53 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:53.415212392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:39:53 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:53.415272093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:39:53 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:53.415281683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:39:53 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:53.415427967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:39:53 multinode-100000 cri-dockerd[1120]: time="2024-08-06T07:39:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/730773bd53054521739eb2bf3731e90f06df86c05a2f2435964943abea426db3/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 06 07:39:54 multinode-100000 cri-dockerd[1120]: time="2024-08-06T07:39:54Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Aug 06 07:39:54 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:54.619309751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:39:54 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:54.619368219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:39:54 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:54.619377598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:39:54 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:54.619772649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f4860a1bb0cb9       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Running             busybox                   0                   730773bd53054       busybox-fc5497c4f-dzbn7
	4a58bc5cb9c3e       cbb01a7bd410d                                                                                         14 minutes ago      Running             coredns                   0                   ea5bc31c54836       coredns-7db6d8ff4d-snf8h
	47e0c0c6895ef       6e38f40d628db                                                                                         14 minutes ago      Running             storage-provisioner       0                   5fae897eca5b0       storage-provisioner
	ca21c7b20c75e       kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3              14 minutes ago      Running             kindnet-cni               0                   731b397a827bd       kindnet-g2xk7
	10a2028447459       55bb025d2cfa5                                                                                         14 minutes ago      Running             kube-proxy                0                   6bbb2ed0b308f       kube-proxy-crsrr
	09c41cba0052b       3edc18e7b7672                                                                                         14 minutes ago      Running             kube-scheduler            0                   d20d569460ead       kube-scheduler-multinode-100000
	b60a8dd0efa51       3861cfcd7c04c                                                                                         14 minutes ago      Running             etcd                      0                   94cf07fa5ddcf       etcd-multinode-100000
	6d93185f30a91       1f6d574d502f3                                                                                         14 minutes ago      Running             kube-apiserver            0                   bde71375b0e4c       kube-apiserver-multinode-100000
	e6892e6b325e1       76932a3b37d7e                                                                                         14 minutes ago      Running             kube-controller-manager   0                   8cca7996d392f       kube-controller-manager-multinode-100000
	
	
	==> coredns [4a58bc5cb9c3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54441 - 10694 "HINFO IN 5152607944082316412.2643734041882751245. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.012399296s
	[INFO] 10.244.0.3:56703 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015252s
	[INFO] 10.244.0.3:42200 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.046026881s
	[INFO] 10.244.0.3:42318 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.01031955s
	[INFO] 10.244.0.3:37586 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.010459799s
	[INFO] 10.244.0.3:58156 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135202s
	[INFO] 10.244.0.3:44245 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010537472s
	[INFO] 10.244.0.3:44922 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150629s
	[INFO] 10.244.0.3:39974 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013721s
	[INFO] 10.244.0.3:33617 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010347469s
	[INFO] 10.244.0.3:38936 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154675s
	[INFO] 10.244.0.3:44726 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080983s
	[INFO] 10.244.0.3:41349 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000247413s
	[INFO] 10.244.0.3:54177 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116507s
	[INFO] 10.244.0.3:35929 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000055089s
	[INFO] 10.244.0.3:46361 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084906s
	[INFO] 10.244.0.3:49686 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085442s
	[INFO] 10.244.0.3:47333 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0000847s
	[INFO] 10.244.0.3:41915 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000057433s
	[INFO] 10.244.0.3:34860 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000071303s
	[INFO] 10.244.0.3:46952 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000111703s
	
	
	==> describe nodes <==
	Name:               multinode-100000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-100000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=multinode-100000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_06T00_38_01_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:37:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-100000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 07:52:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 07:50:14 +0000   Tue, 06 Aug 2024 07:37:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 07:50:14 +0000   Tue, 06 Aug 2024 07:37:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 07:50:14 +0000   Tue, 06 Aug 2024 07:37:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 07:50:14 +0000   Tue, 06 Aug 2024 07:38:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.13
	  Hostname:    multinode-100000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 10d8fd2a8ab04e6a90b6dfc076d9ae86
	  System UUID:                9d6d49b5-0000-0000-bb0f-6ea8b6ad2848
	  Boot ID:                    dbebf245-a006-4d46-bf5f-51c5f84b672f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dzbn7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7db6d8ff4d-snf8h                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-multinode-100000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-g2xk7                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-multinode-100000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-100000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-crsrr                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-multinode-100000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node multinode-100000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node multinode-100000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node multinode-100000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node multinode-100000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node multinode-100000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node multinode-100000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node multinode-100000 event: Registered Node multinode-100000 in Controller
	  Normal  NodeReady                14m                kubelet          Node multinode-100000 status is now: NodeReady
	
	
	Name:               multinode-100000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-100000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=multinode-100000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_06T00_52_07_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:52:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-100000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 07:52:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 07:52:30 +0000   Tue, 06 Aug 2024 07:52:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 07:52:30 +0000   Tue, 06 Aug 2024 07:52:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 07:52:30 +0000   Tue, 06 Aug 2024 07:52:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 07:52:30 +0000   Tue, 06 Aug 2024 07:52:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.15
	  Hostname:    multinode-100000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 e4dd3c8067364c01aff8902f752ac959
	  System UUID:                83a944ea-0000-0000-930f-df1a6331c821
	  Boot ID:                    dc071d27-e6bc-46d1-9730-b50a8d4da1b8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6l7f2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kindnet-dn72w              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      29s
	  kube-system                 kube-proxy-d9c42           0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22s                kube-proxy       
	  Normal  NodeHasSufficientMemory  29s (x2 over 29s)  kubelet          Node multinode-100000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s (x2 over 29s)  kubelet          Node multinode-100000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s (x2 over 29s)  kubelet          Node multinode-100000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           27s                node-controller  Node multinode-100000-m03 event: Registered Node multinode-100000-m03 in Controller
	  Normal  NodeReady                6s                 kubelet          Node multinode-100000-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +2.230733] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.851509] systemd-fstab-generator[493]: Ignoring "noauto" option for root device
	[  +0.100234] systemd-fstab-generator[504]: Ignoring "noauto" option for root device
	[  +1.793153] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +0.258718] systemd-fstab-generator[802]: Ignoring "noauto" option for root device
	[  +0.053606] kauditd_printk_skb: 95 callbacks suppressed
	[  +0.051277] systemd-fstab-generator[814]: Ignoring "noauto" option for root device
	[  +0.111209] systemd-fstab-generator[828]: Ignoring "noauto" option for root device
	[Aug 6 07:37] systemd-fstab-generator[1073]: Ignoring "noauto" option for root device
	[  +0.053283] kauditd_printk_skb: 92 callbacks suppressed
	[  +0.042150] systemd-fstab-generator[1085]: Ignoring "noauto" option for root device
	[  +0.103517] systemd-fstab-generator[1097]: Ignoring "noauto" option for root device
	[  +0.125760] systemd-fstab-generator[1112]: Ignoring "noauto" option for root device
	[  +3.585995] systemd-fstab-generator[1212]: Ignoring "noauto" option for root device
	[  +2.213789] kauditd_printk_skb: 100 callbacks suppressed
	[  +0.337931] systemd-fstab-generator[1463]: Ignoring "noauto" option for root device
	[  +3.523944] systemd-fstab-generator[1642]: Ignoring "noauto" option for root device
	[  +1.294549] kauditd_printk_skb: 100 callbacks suppressed
	[  +3.741886] systemd-fstab-generator[2044]: Ignoring "noauto" option for root device
	[Aug 6 07:38] systemd-fstab-generator[2255]: Ignoring "noauto" option for root device
	[  +0.124943] kauditd_printk_skb: 32 callbacks suppressed
	[ +16.004460] kauditd_printk_skb: 60 callbacks suppressed
	[Aug 6 07:39] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [b60a8dd0efa5] <==
	{"level":"info","ts":"2024-08-06T07:37:56.793645Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-06T07:37:56.796498Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2024-08-06T07:37:56.796632Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","added-peer-id":"e0290fa3161c5471","added-peer-peer-urls":["https://192.169.0.13:2380"]}
	{"level":"info","ts":"2024-08-06T07:37:57.149401Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-06T07:37:57.149446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-06T07:37:57.149465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgPreVoteResp from e0290fa3161c5471 at term 1"}
	{"level":"info","ts":"2024-08-06T07:37:57.149631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became candidate at term 2"}
	{"level":"info","ts":"2024-08-06T07:37:57.14964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgVoteResp from e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-08-06T07:37:57.149646Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became leader at term 2"}
	{"level":"info","ts":"2024-08-06T07:37:57.149652Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e0290fa3161c5471 elected leader e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-08-06T07:37:57.152418Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:37:57.153493Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e0290fa3161c5471","local-member-attributes":"{Name:multinode-100000 ClientURLs:[https://192.169.0.13:2379]}","request-path":"/0/members/e0290fa3161c5471/attributes","cluster-id":"87b46e718846f146","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-06T07:37:57.153528Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T07:37:57.154583Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T07:37:57.156332Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-06T07:37:57.162987Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.13:2379"}
	{"level":"info","ts":"2024-08-06T07:37:57.167336Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-06T07:37:57.167373Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-06T07:37:57.16953Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:37:57.169589Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:37:57.169719Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:47:57.219223Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":686}
	{"level":"info","ts":"2024-08-06T07:47:57.221754Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":686,"took":"2.185771ms","hash":4164319908,"current-db-size-bytes":1994752,"current-db-size":"2.0 MB","current-db-size-in-use-bytes":1994752,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-08-06T07:47:57.221798Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4164319908,"revision":686,"compact-revision":-1}
	{"level":"info","ts":"2024-08-06T07:52:10.269202Z","caller":"traceutil/trace.go:171","msg":"trace[808197773] transaction","detail":"{read_only:false; response_revision:1165; number_of_response:1; }","duration":"104.082235ms","start":"2024-08-06T07:52:10.165072Z","end":"2024-08-06T07:52:10.269154Z","steps":["trace[808197773] 'process raft request'  (duration: 103.999362ms)"],"step_count":1}
	
	
	==> kernel <==
	 07:52:36 up 17 min,  0 users,  load average: 0.52, 0.18, 0.08
	Linux multinode-100000 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ca21c7b20c75] <==
	I0806 07:51:09.609598       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:51:09.609738       1 main.go:299] handling current node
	I0806 07:51:19.608251       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:51:19.608633       1 main.go:299] handling current node
	I0806 07:51:29.610799       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:51:29.611016       1 main.go:299] handling current node
	I0806 07:51:39.608566       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:51:39.608751       1 main.go:299] handling current node
	I0806 07:51:49.609079       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:51:49.609255       1 main.go:299] handling current node
	I0806 07:51:59.615217       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:51:59.615256       1 main.go:299] handling current node
	I0806 07:52:09.608220       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:52:09.608290       1 main.go:299] handling current node
	I0806 07:52:09.608308       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0806 07:52:09.608317       1 main.go:322] Node multinode-100000-m03 has CIDR [10.244.1.0/24] 
	I0806 07:52:09.608837       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.169.0.15 Flags: [] Table: 0} 
	I0806 07:52:19.608568       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0806 07:52:19.608810       1 main.go:322] Node multinode-100000-m03 has CIDR [10.244.1.0/24] 
	I0806 07:52:19.608997       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:52:19.609157       1 main.go:299] handling current node
	I0806 07:52:29.618338       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:52:29.618506       1 main.go:299] handling current node
	I0806 07:52:29.618578       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0806 07:52:29.618615       1 main.go:322] Node multinode-100000-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [6d93185f30a9] <==
	E0806 07:37:58.467821       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E0806 07:37:58.475966       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0806 07:37:58.532827       1 controller.go:615] quota admission added evaluator for: namespaces
	E0806 07:37:58.541093       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0806 07:37:58.672921       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0806 07:37:59.326856       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0806 07:37:59.329555       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0806 07:37:59.329585       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0806 07:37:59.607795       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0806 07:37:59.629707       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0806 07:37:59.743716       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0806 07:37:59.749420       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.13]
	I0806 07:37:59.751068       1 controller.go:615] quota admission added evaluator for: endpoints
	I0806 07:37:59.755409       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0806 07:38:00.364128       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0806 07:38:00.587524       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0806 07:38:00.593919       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0806 07:38:00.599813       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0806 07:38:14.702592       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0806 07:38:14.795881       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0806 07:51:40.593542       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52513: use of closed network connection
	E0806 07:51:40.913864       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52518: use of closed network connection
	E0806 07:51:41.219815       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52523: use of closed network connection
	E0806 07:51:44.319914       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52554: use of closed network connection
	E0806 07:51:44.505332       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52556: use of closed network connection
	
	
	==> kube-controller-manager [e6892e6b325e] <==
	I0806 07:38:15.355219       1 shared_informer.go:320] Caches are synced for garbage collector
	I0806 07:38:15.355235       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0806 07:38:15.401729       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="38.655935ms"
	I0806 07:38:15.431945       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="30.14675ms"
	I0806 07:38:15.458535       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="26.562482ms"
	I0806 07:38:15.458649       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="50.614µs"
	I0806 07:38:30.766337       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="35.896µs"
	I0806 07:38:30.775206       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.914µs"
	I0806 07:38:31.717892       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="59.878µs"
	I0806 07:38:31.736658       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="9.976174ms"
	I0806 07:38:31.737084       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.186µs"
	I0806 07:38:34.714007       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0806 07:39:52.487758       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.078135ms"
	I0806 07:39:52.498018       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.216294ms"
	I0806 07:39:52.498073       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.228µs"
	I0806 07:39:55.173384       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.984127ms"
	I0806 07:39:55.173460       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.692µs"
	I0806 07:52:07.325935       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-100000-m03\" does not exist"
	I0806 07:52:07.342865       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-100000-m03" podCIDRs=["10.244.1.0/24"]
	I0806 07:52:09.851060       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-100000-m03"
	I0806 07:52:30.373055       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-100000-m03"
	I0806 07:52:30.382873       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.276µs"
	I0806 07:52:30.391038       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.602µs"
	I0806 07:52:32.408559       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.578386ms"
	I0806 07:52:32.408616       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.014µs"
	
	
	==> kube-proxy [10a202844745] <==
	I0806 07:38:15.590518       1 server_linux.go:69] "Using iptables proxy"
	I0806 07:38:15.601869       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.13"]
	I0806 07:38:15.662400       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0806 07:38:15.662440       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0806 07:38:15.662490       1 server_linux.go:165] "Using iptables Proxier"
	I0806 07:38:15.664791       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0806 07:38:15.664918       1 server.go:872] "Version info" version="v1.30.3"
	I0806 07:38:15.664946       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 07:38:15.665753       1 config.go:192] "Starting service config controller"
	I0806 07:38:15.665783       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0806 07:38:15.665799       1 config.go:101] "Starting endpoint slice config controller"
	I0806 07:38:15.665822       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0806 07:38:15.667388       1 config.go:319] "Starting node config controller"
	I0806 07:38:15.667416       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0806 07:38:15.765917       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0806 07:38:15.765965       1 shared_informer.go:320] Caches are synced for service config
	I0806 07:38:15.767534       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [09c41cba0052] <==
	W0806 07:37:58.445840       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0806 07:37:58.445932       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0806 07:37:58.446107       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0806 07:37:58.446242       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0806 07:37:58.446116       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0806 07:37:58.446419       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0806 07:37:58.445401       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0806 07:37:58.446582       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0806 07:37:58.446196       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0806 07:37:58.446734       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0806 07:37:59.253603       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0806 07:37:59.253776       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0806 07:37:59.282330       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0806 07:37:59.282504       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0806 07:37:59.305407       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0806 07:37:59.305621       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0806 07:37:59.351009       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0806 07:37:59.351049       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0806 07:37:59.487287       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0806 07:37:59.487395       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0806 07:37:59.506883       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0806 07:37:59.506925       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0806 07:37:59.509357       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0806 07:37:59.509392       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0806 07:38:01.840667       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 06 07:48:00 multinode-100000 kubelet[2051]: E0806 07:48:00.482201    2051 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:48:00 multinode-100000 kubelet[2051]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:48:00 multinode-100000 kubelet[2051]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:48:00 multinode-100000 kubelet[2051]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:48:00 multinode-100000 kubelet[2051]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:49:00 multinode-100000 kubelet[2051]: E0806 07:49:00.485250    2051 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:49:00 multinode-100000 kubelet[2051]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:49:00 multinode-100000 kubelet[2051]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:49:00 multinode-100000 kubelet[2051]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:49:00 multinode-100000 kubelet[2051]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:50:00 multinode-100000 kubelet[2051]: E0806 07:50:00.481450    2051 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:50:00 multinode-100000 kubelet[2051]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:50:00 multinode-100000 kubelet[2051]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:50:00 multinode-100000 kubelet[2051]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:50:00 multinode-100000 kubelet[2051]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:51:00 multinode-100000 kubelet[2051]: E0806 07:51:00.483720    2051 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:51:00 multinode-100000 kubelet[2051]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:51:00 multinode-100000 kubelet[2051]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:51:00 multinode-100000 kubelet[2051]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:51:00 multinode-100000 kubelet[2051]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:52:00 multinode-100000 kubelet[2051]: E0806 07:52:00.481620    2051 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:52:00 multinode-100000 kubelet[2051]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:52:00 multinode-100000 kubelet[2051]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:52:00 multinode-100000 kubelet[2051]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:52:00 multinode-100000 kubelet[2051]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-100000 -n multinode-100000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-100000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/CopyFile FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/CopyFile (2.77s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (11.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-100000 node stop m03
E0806 00:52:41.408862    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
multinode_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p multinode-100000 node stop m03: (8.355558916s)
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-100000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-100000 status: exit status 7 (250.948392ms)

                                                
                                                
-- stdout --
	multinode-100000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-100000-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-100000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-100000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-100000 status --alsologtostderr: exit status 7 (255.862626ms)

                                                
                                                
-- stdout --
	multinode-100000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-100000-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-100000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 00:52:46.279338    5190 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:52:46.279592    5190 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:52:46.279598    5190 out.go:304] Setting ErrFile to fd 2...
	I0806 00:52:46.279606    5190 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:52:46.279765    5190 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	I0806 00:52:46.279937    5190 out.go:298] Setting JSON to false
	I0806 00:52:46.279960    5190 mustload.go:65] Loading cluster: multinode-100000
	I0806 00:52:46.279993    5190 notify.go:220] Checking for updates...
	I0806 00:52:46.280264    5190 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:52:46.280280    5190 status.go:255] checking status of multinode-100000 ...
	I0806 00:52:46.280631    5190 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:52:46.280688    5190 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:52:46.289695    5190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52744
	I0806 00:52:46.290076    5190 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:52:46.290482    5190 main.go:141] libmachine: Using API Version  1
	I0806 00:52:46.290495    5190 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:52:46.290704    5190 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:52:46.290830    5190 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:52:46.290919    5190 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:52:46.290993    5190 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:52:46.291956    5190 status.go:330] multinode-100000 host status = "Running" (err=<nil>)
	I0806 00:52:46.291975    5190 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:52:46.292228    5190 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:52:46.292250    5190 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:52:46.300798    5190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52746
	I0806 00:52:46.301147    5190 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:52:46.301538    5190 main.go:141] libmachine: Using API Version  1
	I0806 00:52:46.301561    5190 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:52:46.301778    5190 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:52:46.301897    5190 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:52:46.302013    5190 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:52:46.302260    5190 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:52:46.302281    5190 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:52:46.310791    5190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52748
	I0806 00:52:46.311106    5190 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:52:46.311458    5190 main.go:141] libmachine: Using API Version  1
	I0806 00:52:46.311474    5190 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:52:46.311689    5190 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:52:46.311810    5190 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:52:46.311950    5190 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:52:46.311971    5190 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:52:46.312047    5190 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:52:46.312131    5190 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:52:46.312213    5190 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:52:46.312294    5190 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:52:46.349571    5190 ssh_runner.go:195] Run: systemctl --version
	I0806 00:52:46.354755    5190 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:52:46.367180    5190 kubeconfig.go:125] found "multinode-100000" server: "https://192.169.0.13:8443"
	I0806 00:52:46.367205    5190 api_server.go:166] Checking apiserver status ...
	I0806 00:52:46.367242    5190 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:52:46.378002    5190 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1953/cgroup
	W0806 00:52:46.384956    5190 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1953/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0806 00:52:46.384997    5190 ssh_runner.go:195] Run: ls
	I0806 00:52:46.388309    5190 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0806 00:52:46.391350    5190 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0806 00:52:46.391360    5190 status.go:422] multinode-100000 apiserver status = Running (err=<nil>)
	I0806 00:52:46.391369    5190 status.go:257] multinode-100000 status: &{Name:multinode-100000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 00:52:46.391381    5190 status.go:255] checking status of multinode-100000-m02 ...
	I0806 00:52:46.391635    5190 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:52:46.391654    5190 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:52:46.400386    5190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52752
	I0806 00:52:46.400703    5190 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:52:46.401050    5190 main.go:141] libmachine: Using API Version  1
	I0806 00:52:46.401067    5190 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:52:46.401292    5190 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:52:46.401409    5190 main.go:141] libmachine: (multinode-100000-m02) Calling .GetState
	I0806 00:52:46.401497    5190 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:52:46.401568    5190 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:52:46.402533    5190 status.go:330] multinode-100000-m02 host status = "Running" (err=<nil>)
	I0806 00:52:46.402541    5190 host.go:66] Checking if "multinode-100000-m02" exists ...
	I0806 00:52:46.402793    5190 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:52:46.402825    5190 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:52:46.411215    5190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52754
	I0806 00:52:46.411541    5190 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:52:46.411871    5190 main.go:141] libmachine: Using API Version  1
	I0806 00:52:46.411884    5190 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:52:46.412102    5190 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:52:46.412209    5190 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:52:46.412296    5190 host.go:66] Checking if "multinode-100000-m02" exists ...
	I0806 00:52:46.412546    5190 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:52:46.412570    5190 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:52:46.421105    5190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52756
	I0806 00:52:46.421442    5190 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:52:46.421795    5190 main.go:141] libmachine: Using API Version  1
	I0806 00:52:46.421807    5190 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:52:46.422043    5190 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:52:46.422177    5190 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:52:46.422304    5190 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:52:46.422316    5190 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:52:46.422390    5190 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:52:46.422462    5190 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:52:46.422540    5190 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:52:46.422611    5190 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:52:46.459220    5190 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:52:46.469306    5190 status.go:257] multinode-100000-m02 status: &{Name:multinode-100000-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0806 00:52:46.469319    5190 status.go:255] checking status of multinode-100000-m03 ...
	I0806 00:52:46.469587    5190 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:52:46.469608    5190 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:52:46.478224    5190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52759
	I0806 00:52:46.478571    5190 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:52:46.478905    5190 main.go:141] libmachine: Using API Version  1
	I0806 00:52:46.478915    5190 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:52:46.479129    5190 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:52:46.479240    5190 main.go:141] libmachine: (multinode-100000-m03) Calling .GetState
	I0806 00:52:46.479326    5190 main.go:141] libmachine: (multinode-100000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:52:46.479402    5190 main.go:141] libmachine: (multinode-100000-m03) DBG | hyperkit pid from json: 5072
	I0806 00:52:46.480372    5190 main.go:141] libmachine: (multinode-100000-m03) DBG | hyperkit pid 5072 missing from process table
	I0806 00:52:46.480398    5190 status.go:330] multinode-100000-m03 host status = "Stopped" (err=<nil>)
	I0806 00:52:46.480407    5190 status.go:343] host is not running, skipping remaining checks
	I0806 00:52:46.480416    5190 status.go:257] multinode-100000-m03 status: &{Name:multinode-100000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-amd64 -p multinode-100000 status --alsologtostderr": multinode-100000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

multinode-100000-m02
type: Worker
host: Running
kubelet: Stopped

multinode-100000-m03
type: Worker
host: Stopped
kubelet: Stopped

multinode_test.go:275: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-100000 status --alsologtostderr": multinode-100000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

multinode-100000-m02
type: Worker
host: Running
kubelet: Stopped

multinode-100000-m03
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-100000 -n multinode-100000
helpers_test.go:244: <<< TestMultiNode/serial/StopNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-100000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-100000 logs -n 25: (2.067650497s)
helpers_test.go:252: TestMultiNode/serial/StopNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| kubectl | -p multinode-100000 -- rollout       | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:39 PDT |                     |
	|         | status deployment/busybox            |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:49 PDT | 06 Aug 24 00:49 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:49 PDT | 06 Aug 24 00:49 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:49 PDT | 06 Aug 24 00:49 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec          | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT |                     |
	|         | busybox-fc5497c4f-6l7f2 --           |                  |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec          | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | busybox-fc5497c4f-dzbn7 --           |                  |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec          | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT |                     |
	|         | busybox-fc5497c4f-6l7f2 --           |                  |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec          | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | busybox-fc5497c4f-dzbn7 --           |                  |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec          | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT |                     |
	|         | busybox-fc5497c4f-6l7f2 -- nslookup  |                  |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec          | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | busybox-fc5497c4f-dzbn7 -- nslookup  |                  |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec          | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT |                     |
	|         | busybox-fc5497c4f-6l7f2              |                  |         |         |                     |                     |
	|         | -- sh -c nslookup                    |                  |         |         |                     |                     |
	|         | host.minikube.internal | awk         |                  |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec          | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | busybox-fc5497c4f-dzbn7              |                  |         |         |                     |                     |
	|         | -- sh -c nslookup                    |                  |         |         |                     |                     |
	|         | host.minikube.internal | awk         |                  |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec          | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | busybox-fc5497c4f-dzbn7 -- sh        |                  |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |                  |         |         |                     |                     |
	| node    | add -p multinode-100000 -v 3         | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:52 PDT |
	|         | --alsologtostderr                    |                  |         |         |                     |                     |
	| node    | multinode-100000 node stop m03       | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:52 PDT | 06 Aug 24 00:52 PDT |
	|---------|--------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 00:35:32
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 00:35:32.676325    4292 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:35:32.676601    4292 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:35:32.676607    4292 out.go:304] Setting ErrFile to fd 2...
	I0806 00:35:32.676610    4292 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:35:32.676768    4292 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	I0806 00:35:32.678248    4292 out.go:298] Setting JSON to false
	I0806 00:35:32.700659    4292 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2094,"bootTime":1722927638,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0806 00:35:32.700749    4292 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 00:35:32.723275    4292 out.go:177] * [multinode-100000] minikube v1.33.1 on Darwin 14.5
	I0806 00:35:32.765686    4292 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 00:35:32.765838    4292 notify.go:220] Checking for updates...
	I0806 00:35:32.808341    4292 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:35:32.829496    4292 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0806 00:35:32.850407    4292 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:35:32.871672    4292 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 00:35:32.892641    4292 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 00:35:32.913945    4292 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:35:32.944520    4292 out.go:177] * Using the hyperkit driver based on user configuration
	I0806 00:35:32.986143    4292 start.go:297] selected driver: hyperkit
	I0806 00:35:32.986161    4292 start.go:901] validating driver "hyperkit" against <nil>
	I0806 00:35:32.986176    4292 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 00:35:32.989717    4292 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:35:32.989824    4292 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19370-944/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0806 00:35:32.998218    4292 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0806 00:35:33.002169    4292 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:35:33.002189    4292 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0806 00:35:33.002223    4292 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 00:35:33.002423    4292 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 00:35:33.002481    4292 cni.go:84] Creating CNI manager for ""
	I0806 00:35:33.002490    4292 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0806 00:35:33.002502    4292 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0806 00:35:33.002569    4292 start.go:340] cluster config:
	{Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:35:33.002652    4292 iso.go:125] acquiring lock: {Name:mka9ceffb203a07dd8928fb34e5b66df1a4204ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:35:33.044508    4292 out.go:177] * Starting "multinode-100000" primary control-plane node in "multinode-100000" cluster
	I0806 00:35:33.065219    4292 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:35:33.065293    4292 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0806 00:35:33.065354    4292 cache.go:56] Caching tarball of preloaded images
	I0806 00:35:33.065635    4292 preload.go:172] Found /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0806 00:35:33.065654    4292 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 00:35:33.066173    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:35:33.066211    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json: {Name:mk72349cbf3074da6761af52b168e673548f3ffe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:35:33.066817    4292 start.go:360] acquireMachinesLock for multinode-100000: {Name:mk23fe223591838ba69a1052c4474834b6e8897d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:35:33.066922    4292 start.go:364] duration metric: took 85.684µs to acquireMachinesLock for "multinode-100000"
	I0806 00:35:33.066972    4292 start.go:93] Provisioning new machine with config: &{Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 00:35:33.067065    4292 start.go:125] createHost starting for "" (driver="hyperkit")
	I0806 00:35:33.088582    4292 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 00:35:33.088841    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:35:33.088907    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:35:33.098805    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52410
	I0806 00:35:33.099159    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:35:33.099600    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:35:33.099614    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:35:33.099818    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:35:33.099943    4292 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:35:33.100033    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:33.100130    4292 start.go:159] libmachine.API.Create for "multinode-100000" (driver="hyperkit")
	I0806 00:35:33.100152    4292 client.go:168] LocalClient.Create starting
	I0806 00:35:33.100189    4292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem
	I0806 00:35:33.100243    4292 main.go:141] libmachine: Decoding PEM data...
	I0806 00:35:33.100257    4292 main.go:141] libmachine: Parsing certificate...
	I0806 00:35:33.100320    4292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem
	I0806 00:35:33.100359    4292 main.go:141] libmachine: Decoding PEM data...
	I0806 00:35:33.100370    4292 main.go:141] libmachine: Parsing certificate...
	I0806 00:35:33.100382    4292 main.go:141] libmachine: Running pre-create checks...
	I0806 00:35:33.100392    4292 main.go:141] libmachine: (multinode-100000) Calling .PreCreateCheck
	I0806 00:35:33.100485    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:33.100635    4292 main.go:141] libmachine: (multinode-100000) Calling .GetConfigRaw
	I0806 00:35:33.109837    4292 main.go:141] libmachine: Creating machine...
	I0806 00:35:33.109854    4292 main.go:141] libmachine: (multinode-100000) Calling .Create
	I0806 00:35:33.110025    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:33.110277    4292 main.go:141] libmachine: (multinode-100000) DBG | I0806 00:35:33.110022    4300 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 00:35:33.110418    4292 main.go:141] libmachine: (multinode-100000) Downloading /Users/jenkins/minikube-integration/19370-944/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-944/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 00:35:33.295827    4292 main.go:141] libmachine: (multinode-100000) DBG | I0806 00:35:33.295690    4300 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa...
	I0806 00:35:33.502634    4292 main.go:141] libmachine: (multinode-100000) DBG | I0806 00:35:33.502493    4300 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/multinode-100000.rawdisk...
	I0806 00:35:33.502655    4292 main.go:141] libmachine: (multinode-100000) DBG | Writing magic tar header
	I0806 00:35:33.502665    4292 main.go:141] libmachine: (multinode-100000) DBG | Writing SSH key tar header
	I0806 00:35:33.503537    4292 main.go:141] libmachine: (multinode-100000) DBG | I0806 00:35:33.503390    4300 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000 ...
	I0806 00:35:33.877390    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:33.877412    4292 main.go:141] libmachine: (multinode-100000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/hyperkit.pid
	I0806 00:35:33.877424    4292 main.go:141] libmachine: (multinode-100000) DBG | Using UUID 9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848
	I0806 00:35:33.988705    4292 main.go:141] libmachine: (multinode-100000) DBG | Generated MAC 1a:eb:5b:3:28:91
	I0806 00:35:33.988725    4292 main.go:141] libmachine: (multinode-100000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000
	I0806 00:35:33.988759    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000aa330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 00:35:33.988793    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000aa330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 00:35:33.988839    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/multinode-100000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"}
	I0806 00:35:33.988870    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/multinode-100000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/console-ring -f kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"
	I0806 00:35:33.988893    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0806 00:35:33.991956    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: Pid is 4303
	I0806 00:35:33.992376    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 0
	I0806 00:35:33.992391    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:33.992446    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:33.993278    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:33.993360    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:33.993380    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:33.993405    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:33.993424    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:33.993437    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:33.993449    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:33.993464    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:33.993498    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:33.993520    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:33.993540    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:33.993552    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:33.993562    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:33.999245    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0806 00:35:34.053136    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0806 00:35:34.053714    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:35:34.053737    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:35:34.053746    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:35:34.053754    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:35:34.433368    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0806 00:35:34.433384    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0806 00:35:34.548018    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:35:34.548040    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:35:34.548066    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:35:34.548085    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:35:34.548944    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0806 00:35:34.548954    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0806 00:35:35.995149    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 1
	I0806 00:35:35.995163    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:35.995266    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:35.996054    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:35.996094    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:35.996108    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:35.996132    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:35.996169    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:35.996185    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:35.996200    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:35.996223    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:35.996236    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:35.996250    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:35.996258    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:35.996265    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:35.996272    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:37.997721    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 2
	I0806 00:35:37.997737    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:37.997833    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:37.998751    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:37.998796    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:37.998808    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:37.998817    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:37.998824    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:37.998834    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:37.998843    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:37.998850    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:37.998857    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:37.998872    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:37.998885    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:37.998906    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:37.998915    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:40.000050    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 3
	I0806 00:35:40.000064    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:40.000167    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:40.000922    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:40.000982    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:40.000992    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:40.001002    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:40.001009    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:40.001016    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:40.001021    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:40.001028    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:40.001034    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:40.001051    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:40.001065    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:40.001075    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:40.001092    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:40.125670    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:40 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0806 00:35:40.125726    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:40 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0806 00:35:40.125735    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:40 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0806 00:35:40.149566    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:40 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0806 00:35:42.001968    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 4
	I0806 00:35:42.001983    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:42.002066    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:42.002835    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:42.002890    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:42.002900    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:42.002909    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:42.002917    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:42.002940    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:42.002948    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:42.002955    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:42.002964    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:42.002970    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:42.002978    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:42.002985    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:42.002996    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:44.004662    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 5
	I0806 00:35:44.004678    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:44.004700    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:44.005526    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:44.005569    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:35:44.005581    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:35:44.005591    4292 main.go:141] libmachine: (multinode-100000) DBG | Found match: 1a:eb:5b:3:28:91
	I0806 00:35:44.005619    4292 main.go:141] libmachine: (multinode-100000) DBG | IP: 192.169.0.13
	I0806 00:35:44.005700    4292 main.go:141] libmachine: (multinode-100000) Calling .GetConfigRaw
	I0806 00:35:44.006323    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:44.006428    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:44.006524    4292 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0806 00:35:44.006537    4292 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:35:44.006634    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:44.006694    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:44.007476    4292 main.go:141] libmachine: Detecting operating system of created instance...
	I0806 00:35:44.007487    4292 main.go:141] libmachine: Waiting for SSH to be available...
	I0806 00:35:44.007493    4292 main.go:141] libmachine: Getting to WaitForSSH function...
	I0806 00:35:44.007498    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:44.007591    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:44.007674    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:44.007764    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:44.007853    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:44.007987    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:44.008184    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:44.008192    4292 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0806 00:35:45.076448    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:35:45.076465    4292 main.go:141] libmachine: Detecting the provisioner...
	I0806 00:35:45.076471    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.076624    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.076724    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.076819    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.076915    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.077045    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.077189    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.077197    4292 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0806 00:35:45.144548    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0806 00:35:45.144591    4292 main.go:141] libmachine: found compatible host: buildroot
	I0806 00:35:45.144598    4292 main.go:141] libmachine: Provisioning with buildroot...
	I0806 00:35:45.144603    4292 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:35:45.144740    4292 buildroot.go:166] provisioning hostname "multinode-100000"
	I0806 00:35:45.144749    4292 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:35:45.144843    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.144938    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.145034    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.145124    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.145213    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.145351    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.145492    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.145501    4292 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-100000 && echo "multinode-100000" | sudo tee /etc/hostname
	I0806 00:35:45.223228    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-100000
	
	I0806 00:35:45.223249    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.223379    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.223481    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.223570    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.223660    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.223790    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.223939    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.223951    4292 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-100000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-100000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-100000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 00:35:45.292034    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:35:45.292059    4292 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19370-944/.minikube CaCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19370-944/.minikube}
	I0806 00:35:45.292078    4292 buildroot.go:174] setting up certificates
	I0806 00:35:45.292089    4292 provision.go:84] configureAuth start
	I0806 00:35:45.292095    4292 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:35:45.292225    4292 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:35:45.292323    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.292419    4292 provision.go:143] copyHostCerts
	I0806 00:35:45.292449    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:35:45.292512    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem, removing ...
	I0806 00:35:45.292520    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:35:45.292668    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem (1078 bytes)
	I0806 00:35:45.292900    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:35:45.292931    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem, removing ...
	I0806 00:35:45.292935    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:35:45.293022    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem (1123 bytes)
	I0806 00:35:45.293179    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:35:45.293218    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem, removing ...
	I0806 00:35:45.293223    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:35:45.293307    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem (1679 bytes)
	I0806 00:35:45.293461    4292 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem org=jenkins.multinode-100000 san=[127.0.0.1 192.169.0.13 localhost minikube multinode-100000]
	I0806 00:35:45.520073    4292 provision.go:177] copyRemoteCerts
	I0806 00:35:45.520131    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 00:35:45.520149    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.520304    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.520400    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.520492    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.520588    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:35:45.562400    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0806 00:35:45.562481    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 00:35:45.581346    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0806 00:35:45.581402    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0806 00:35:45.600722    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0806 00:35:45.600779    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 00:35:45.620152    4292 provision.go:87] duration metric: took 328.044128ms to configureAuth
	I0806 00:35:45.620167    4292 buildroot.go:189] setting minikube options for container-runtime
	I0806 00:35:45.620308    4292 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:35:45.620324    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:45.620480    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.620572    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.620655    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.620746    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.620832    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.620951    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.621092    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.621099    4292 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0806 00:35:45.688009    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0806 00:35:45.688025    4292 buildroot.go:70] root file system type: tmpfs
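The provisioner's probe above detects the root filesystem type over SSH. A local sketch of the same pipeline (run against the local machine rather than the VM, so the value will differ from the `tmpfs` reported in the log):

```shell
# df --output=fstype prints a header line plus the value, so tail -n 1
# keeps only the filesystem type itself (GNU coreutils df assumed).
fstype=$(df --output=fstype / | tail -n 1)
echo "root fs: $fstype"
```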
	I0806 00:35:45.688103    4292 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0806 00:35:45.688116    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.688258    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.688371    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.688463    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.688579    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.688745    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.688882    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.688931    4292 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0806 00:35:45.766293    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
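The unit file above is rendered to a `.new` path first so the next step can diff-and-swap it; the empty `ExecStart=` line followed by a populated one is the standard systemd idiom for clearing an inherited command before setting a new one. A minimal sketch of that render step, using a temp directory instead of `/lib/systemd/system` (no sudo; the unit content here is abbreviated and illustrative):

```shell
# Render a drop-in-style unit to <name>.new; the doubled ExecStart
# clears the inherited command, then sets the real one.
unit_dir=$(mktemp -d)
printf '%s\n' \
  '[Unit]' \
  'Description=Docker Application Container Engine' \
  '[Service]' \
  'ExecStart=' \
  'ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock' \
  '[Install]' \
  'WantedBy=multi-user.target' \
  > "$unit_dir/docker.service.new"
grep -c '^ExecStart=' "$unit_dir/docker.service.new"   # prints 2
```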
	I0806 00:35:45.766319    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.766466    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.766564    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.766645    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.766724    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.766843    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.766987    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.766999    4292 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0806 00:35:47.341714    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
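The `diff ... || { mv ...; restart ...; }` command above only swaps the unit in and restarts docker when the rendered file differs from (or, as here, when there is no) existing file. A sketch of that idempotent-update pattern with plain files and no systemctl (paths are scratch files, not real units):

```shell
dir=$(mktemp -d)
echo "v2" > "$dir/docker.service.new"
restart_needed=no
# diff exits non-zero when the files differ or the target is missing,
# which is exactly what triggers the mv + restart branch in the log.
diff -u "$dir/docker.service" "$dir/docker.service.new" 2>/dev/null || {
  mv "$dir/docker.service.new" "$dir/docker.service"
  restart_needed=yes
}
echo "$restart_needed"
```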
	I0806 00:35:47.341733    4292 main.go:141] libmachine: Checking connection to Docker...
	I0806 00:35:47.341750    4292 main.go:141] libmachine: (multinode-100000) Calling .GetURL
	I0806 00:35:47.341889    4292 main.go:141] libmachine: Docker is up and running!
	I0806 00:35:47.341898    4292 main.go:141] libmachine: Reticulating splines...
	I0806 00:35:47.341902    4292 client.go:171] duration metric: took 14.241464585s to LocalClient.Create
	I0806 00:35:47.341919    4292 start.go:167] duration metric: took 14.241510649s to libmachine.API.Create "multinode-100000"
	I0806 00:35:47.341930    4292 start.go:293] postStartSetup for "multinode-100000" (driver="hyperkit")
	I0806 00:35:47.341937    4292 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 00:35:47.341947    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.342092    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 00:35:47.342105    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:47.342199    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:47.342285    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.342379    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:47.342467    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:35:47.382587    4292 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 00:35:47.385469    4292 command_runner.go:130] > NAME=Buildroot
	I0806 00:35:47.385477    4292 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0806 00:35:47.385481    4292 command_runner.go:130] > ID=buildroot
	I0806 00:35:47.385485    4292 command_runner.go:130] > VERSION_ID=2023.02.9
	I0806 00:35:47.385489    4292 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0806 00:35:47.385581    4292 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 00:35:47.385594    4292 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/addons for local assets ...
	I0806 00:35:47.385696    4292 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/files for local assets ...
	I0806 00:35:47.385887    4292 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> 14372.pem in /etc/ssl/certs
	I0806 00:35:47.385903    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> /etc/ssl/certs/14372.pem
	I0806 00:35:47.386118    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 00:35:47.394135    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem --> /etc/ssl/certs/14372.pem (1708 bytes)
	I0806 00:35:47.413151    4292 start.go:296] duration metric: took 71.212336ms for postStartSetup
	I0806 00:35:47.413177    4292 main.go:141] libmachine: (multinode-100000) Calling .GetConfigRaw
	I0806 00:35:47.413783    4292 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:35:47.413932    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:35:47.414265    4292 start.go:128] duration metric: took 14.346903661s to createHost
	I0806 00:35:47.414279    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:47.414369    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:47.414451    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.414534    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.414620    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:47.414723    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:47.414850    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:47.414859    4292 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0806 00:35:47.480376    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722929747.524109427
	
	I0806 00:35:47.480388    4292 fix.go:216] guest clock: 1722929747.524109427
	I0806 00:35:47.480393    4292 fix.go:229] Guest: 2024-08-06 00:35:47.524109427 -0700 PDT Remote: 2024-08-06 00:35:47.414273 -0700 PDT m=+14.774098631 (delta=109.836427ms)
	I0806 00:35:47.480413    4292 fix.go:200] guest clock delta is within tolerance: 109.836427ms
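`fix.go` above compares the guest clock against the host and accepts small skew (here 109.8ms). A sketch of the same tolerance check with awk, using the two timestamps from the log; the 1-second threshold is an assumption for illustration, not minikube's actual constant:

```shell
guest=1722929747.524109427
host=1722929747.414273
verdict=$(awk -v g="$guest" -v h="$host" 'BEGIN {
  d = g - h; if (d < 0) d = -d        # absolute clock delta in seconds
  printf "%s", (d < 1.0 ? "within" : "skewed")
}')
echo "$verdict"
```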
	I0806 00:35:47.480416    4292 start.go:83] releasing machines lock for "multinode-100000", held for 14.413201307s
	I0806 00:35:47.480435    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.480582    4292 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:35:47.480686    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.481025    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.481144    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.481220    4292 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 00:35:47.481250    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:47.481279    4292 ssh_runner.go:195] Run: cat /version.json
	I0806 00:35:47.481291    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:47.481352    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:47.481353    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:47.481449    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.481463    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.481541    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:47.481556    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:47.481638    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:35:47.481653    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:35:47.582613    4292 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0806 00:35:47.583428    4292 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0806 00:35:47.583596    4292 ssh_runner.go:195] Run: systemctl --version
	I0806 00:35:47.588843    4292 command_runner.go:130] > systemd 252 (252)
	I0806 00:35:47.588866    4292 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0806 00:35:47.588920    4292 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0806 00:35:47.593612    4292 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0806 00:35:47.593639    4292 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 00:35:47.593687    4292 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 00:35:47.607350    4292 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0806 00:35:47.607480    4292 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 00:35:47.607494    4292 start.go:495] detecting cgroup driver to use...
	I0806 00:35:47.607588    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:35:47.622260    4292 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
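The crictl configuration above is written by piping a one-line YAML document through `sudo tee`. The same write, sketched against a temp directory instead of `/etc` (so no sudo is needed):

```shell
etc=$(mktemp -d)
# crictl reads the runtime endpoint from crictl.yaml; here it points at
# containerd's socket, as in the log line above.
printf '%s\n' 'runtime-endpoint: unix:///run/containerd/containerd.sock' \
  | tee "$etc/crictl.yaml" > /dev/null
cat "$etc/crictl.yaml"
```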
	I0806 00:35:47.622586    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0806 00:35:47.631764    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0806 00:35:47.640650    4292 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0806 00:35:47.640704    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0806 00:35:47.649724    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:35:47.658558    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0806 00:35:47.667341    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:35:47.677183    4292 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 00:35:47.686281    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0806 00:35:47.695266    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0806 00:35:47.704014    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
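The cgroup-driver rewrite above is a series of in-place `sed` edits on containerd's `config.toml`. A minimal reproduction of the `SystemdCgroup` edit on a scratch copy (the config content is illustrative, and GNU `sed -i` is assumed):

```shell
cfg=$(mktemp)
printf '%s\n' \
  '[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]' \
  '  SystemdCgroup = true' > "$cfg"
# Same substitution as the log: flip SystemdCgroup to false while
# preserving the original indentation via the \1 capture group.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup' "$cfg"
```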
	I0806 00:35:47.712970    4292 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 00:35:47.720743    4292 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0806 00:35:47.720841    4292 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 00:35:47.728846    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:35:47.828742    4292 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0806 00:35:47.848191    4292 start.go:495] detecting cgroup driver to use...
	I0806 00:35:47.848271    4292 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0806 00:35:47.862066    4292 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0806 00:35:47.862604    4292 command_runner.go:130] > [Unit]
	I0806 00:35:47.862619    4292 command_runner.go:130] > Description=Docker Application Container Engine
	I0806 00:35:47.862625    4292 command_runner.go:130] > Documentation=https://docs.docker.com
	I0806 00:35:47.862630    4292 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0806 00:35:47.862634    4292 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0806 00:35:47.862642    4292 command_runner.go:130] > StartLimitBurst=3
	I0806 00:35:47.862646    4292 command_runner.go:130] > StartLimitIntervalSec=60
	I0806 00:35:47.862663    4292 command_runner.go:130] > [Service]
	I0806 00:35:47.862670    4292 command_runner.go:130] > Type=notify
	I0806 00:35:47.862674    4292 command_runner.go:130] > Restart=on-failure
	I0806 00:35:47.862696    4292 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0806 00:35:47.862704    4292 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0806 00:35:47.862710    4292 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0806 00:35:47.862716    4292 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0806 00:35:47.862724    4292 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0806 00:35:47.862731    4292 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0806 00:35:47.862742    4292 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0806 00:35:47.862756    4292 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0806 00:35:47.862768    4292 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0806 00:35:47.862789    4292 command_runner.go:130] > ExecStart=
	I0806 00:35:47.862803    4292 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0806 00:35:47.862808    4292 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0806 00:35:47.862814    4292 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0806 00:35:47.862820    4292 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0806 00:35:47.862826    4292 command_runner.go:130] > LimitNOFILE=infinity
	I0806 00:35:47.862831    4292 command_runner.go:130] > LimitNPROC=infinity
	I0806 00:35:47.862835    4292 command_runner.go:130] > LimitCORE=infinity
	I0806 00:35:47.862840    4292 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0806 00:35:47.862847    4292 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0806 00:35:47.862852    4292 command_runner.go:130] > TasksMax=infinity
	I0806 00:35:47.862857    4292 command_runner.go:130] > TimeoutStartSec=0
	I0806 00:35:47.862864    4292 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0806 00:35:47.862869    4292 command_runner.go:130] > Delegate=yes
	I0806 00:35:47.862875    4292 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0806 00:35:47.862880    4292 command_runner.go:130] > KillMode=process
	I0806 00:35:47.862885    4292 command_runner.go:130] > [Install]
	I0806 00:35:47.862897    4292 command_runner.go:130] > WantedBy=multi-user.target
	I0806 00:35:47.862957    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:35:47.874503    4292 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 00:35:47.888401    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:35:47.899678    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:35:47.910858    4292 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0806 00:35:47.935194    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:35:47.946319    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:35:47.961240    4292 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0806 00:35:47.961509    4292 ssh_runner.go:195] Run: which cri-dockerd
	I0806 00:35:47.964405    4292 command_runner.go:130] > /usr/bin/cri-dockerd
	I0806 00:35:47.964539    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0806 00:35:47.972571    4292 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0806 00:35:47.986114    4292 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0806 00:35:48.089808    4292 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0806 00:35:48.189821    4292 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0806 00:35:48.189902    4292 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0806 00:35:48.205371    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:35:48.305180    4292 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:35:50.610688    4292 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.305442855s)
	I0806 00:35:50.610744    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0806 00:35:50.621917    4292 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0806 00:37:45.085447    4292 ssh_runner.go:235] Completed: sudo systemctl stop cri-docker.socket: (1m54.461245771s)
	I0806 00:37:45.085519    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0806 00:37:45.097196    4292 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0806 00:37:45.197114    4292 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0806 00:37:45.292406    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:37:45.391129    4292 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0806 00:37:45.405046    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0806 00:37:45.416102    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:37:45.533604    4292 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0806 00:37:45.589610    4292 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0806 00:37:45.589706    4292 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0806 00:37:45.594037    4292 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0806 00:37:45.594049    4292 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0806 00:37:45.594054    4292 command_runner.go:130] > Device: 0,22	Inode: 805         Links: 1
	I0806 00:37:45.594060    4292 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0806 00:37:45.594064    4292 command_runner.go:130] > Access: 2024-08-06 07:37:45.625216614 +0000
	I0806 00:37:45.594069    4292 command_runner.go:130] > Modify: 2024-08-06 07:37:45.625216614 +0000
	I0806 00:37:45.594073    4292 command_runner.go:130] > Change: 2024-08-06 07:37:45.627215775 +0000
	I0806 00:37:45.594076    4292 command_runner.go:130] >  Birth: -
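The "Will wait 60s for socket path" step above polls `stat` until the socket appears. A generic wait-for-path helper in the same spirit (the function name, path, and timeout are examples, not minikube's implementation):

```shell
wait_for_path() {
  # Poll for $1 to exist, up to $2 seconds (default 60).
  path=$1; deadline=$(( $(date +%s) + ${2:-60} ))
  while [ "$(date +%s)" -lt "$deadline" ]; do
    stat "$path" > /dev/null 2>&1 && return 0
    sleep 1
  done
  return 1
}
target=$(mktemp)    # exists immediately, so the wait returns at once
wait_for_path "$target" 5 && echo "ready"
```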
	I0806 00:37:45.594117    4292 start.go:563] Will wait 60s for crictl version
	I0806 00:37:45.594161    4292 ssh_runner.go:195] Run: which crictl
	I0806 00:37:45.596956    4292 command_runner.go:130] > /usr/bin/crictl
	I0806 00:37:45.597171    4292 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 00:37:45.621060    4292 command_runner.go:130] > Version:  0.1.0
	I0806 00:37:45.621116    4292 command_runner.go:130] > RuntimeName:  docker
	I0806 00:37:45.621195    4292 command_runner.go:130] > RuntimeVersion:  27.1.1
	I0806 00:37:45.621265    4292 command_runner.go:130] > RuntimeApiVersion:  v1
	I0806 00:37:45.622461    4292 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0806 00:37:45.622524    4292 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0806 00:37:45.639748    4292 command_runner.go:130] > 27.1.1
	I0806 00:37:45.640898    4292 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0806 00:37:45.659970    4292 command_runner.go:130] > 27.1.1
	I0806 00:37:45.682623    4292 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0806 00:37:45.682654    4292 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:37:45.682940    4292 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0806 00:37:45.686120    4292 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 00:37:45.696475    4292 kubeadm.go:883] updating cluster {Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 00:37:45.696537    4292 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:37:45.696591    4292 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0806 00:37:45.709358    4292 docker.go:685] Got preloaded images: 
	I0806 00:37:45.709371    4292 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0806 00:37:45.709415    4292 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0806 00:37:45.717614    4292 command_runner.go:139] > {"Repositories":{}}
	I0806 00:37:45.717741    4292 ssh_runner.go:195] Run: which lz4
	I0806 00:37:45.720684    4292 command_runner.go:130] > /usr/bin/lz4
	I0806 00:37:45.720774    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0806 00:37:45.720887    4292 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0806 00:37:45.723901    4292 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 00:37:45.723990    4292 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 00:37:45.724007    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
	I0806 00:37:46.617374    4292 docker.go:649] duration metric: took 896.51057ms to copy over tarball
	I0806 00:37:46.617438    4292 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0806 00:37:48.962709    4292 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.345209203s)
	I0806 00:37:48.962723    4292 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 00:37:48.989708    4292 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0806 00:37:48.998314    4292 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.3":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.3":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.3":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.3":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0806 00:37:48.998434    4292 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0806 00:37:49.011940    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:37:49.104996    4292 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:37:51.441428    4292 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.336367372s)
	I0806 00:37:51.441504    4292 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0806 00:37:51.454654    4292 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0806 00:37:51.454669    4292 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0806 00:37:51.454674    4292 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0806 00:37:51.454682    4292 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0806 00:37:51.454686    4292 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0806 00:37:51.454690    4292 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0806 00:37:51.454695    4292 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0806 00:37:51.454700    4292 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:37:51.455392    4292 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0806 00:37:51.455409    4292 cache_images.go:84] Images are preloaded, skipping loading
	I0806 00:37:51.455420    4292 kubeadm.go:934] updating node { 192.169.0.13 8443 v1.30.3 docker true true} ...
	I0806 00:37:51.455506    4292 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-100000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 00:37:51.455578    4292 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0806 00:37:51.493148    4292 command_runner.go:130] > cgroupfs
	I0806 00:37:51.493761    4292 cni.go:84] Creating CNI manager for ""
	I0806 00:37:51.493770    4292 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0806 00:37:51.493779    4292 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 00:37:51.493799    4292 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.13 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-100000 NodeName:multinode-100000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 00:37:51.493886    4292 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-100000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 00:37:51.493946    4292 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 00:37:51.501517    4292 command_runner.go:130] > kubeadm
	I0806 00:37:51.501524    4292 command_runner.go:130] > kubectl
	I0806 00:37:51.501527    4292 command_runner.go:130] > kubelet
	I0806 00:37:51.501670    4292 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 00:37:51.501712    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 00:37:51.509045    4292 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0806 00:37:51.522572    4292 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 00:37:51.535791    4292 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0806 00:37:51.549550    4292 ssh_runner.go:195] Run: grep 192.169.0.13	control-plane.minikube.internal$ /etc/hosts
	I0806 00:37:51.552639    4292 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 00:37:51.562209    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:37:51.657200    4292 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:37:51.669303    4292 certs.go:68] Setting up /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000 for IP: 192.169.0.13
	I0806 00:37:51.669315    4292 certs.go:194] generating shared ca certs ...
	I0806 00:37:51.669325    4292 certs.go:226] acquiring lock for ca certs: {Name:mk58145664d6c2b1eff70ba1600cc91cf1a11355 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.669518    4292 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.key
	I0806 00:37:51.669593    4292 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.key
	I0806 00:37:51.669606    4292 certs.go:256] generating profile certs ...
	I0806 00:37:51.669656    4292 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key
	I0806 00:37:51.669668    4292 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt with IP's: []
	I0806 00:37:51.792624    4292 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt ...
	I0806 00:37:51.792639    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt: {Name:mk8667fc194de8cf8fded4f6b0b716fe105f94fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.792981    4292 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key ...
	I0806 00:37:51.792989    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key: {Name:mk5693609b0c83eb3bce2eae7a5d8211445280d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.793215    4292 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key.de816dec
	I0806 00:37:51.793229    4292 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt.de816dec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.13]
	I0806 00:37:51.926808    4292 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt.de816dec ...
	I0806 00:37:51.926818    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt.de816dec: {Name:mk977e2f365dba4e3b0587a998566fa4d7926493 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.927069    4292 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key.de816dec ...
	I0806 00:37:51.927078    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key.de816dec: {Name:mkdef83341ea7ae5698bd9e2d60c39f8cd2a4e46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.927285    4292 certs.go:381] copying /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt.de816dec -> /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt
	I0806 00:37:51.927484    4292 certs.go:385] copying /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key.de816dec -> /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key
	I0806 00:37:51.927653    4292 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key
	I0806 00:37:51.927669    4292 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt with IP's: []
	I0806 00:37:52.088433    4292 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt ...
	I0806 00:37:52.088444    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt: {Name:mkc673b9a3bc6652ddb14f333f9d124c615a6826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:52.088718    4292 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key ...
	I0806 00:37:52.088726    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key: {Name:mkf7f90929aa11855cc285630f5ad4bb575ccae4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:52.088945    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0806 00:37:52.088974    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0806 00:37:52.088995    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0806 00:37:52.089015    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0806 00:37:52.089034    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0806 00:37:52.089054    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0806 00:37:52.089072    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0806 00:37:52.089091    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0806 00:37:52.089188    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437.pem (1338 bytes)
	W0806 00:37:52.089246    4292 certs.go:480] ignoring /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437_empty.pem, impossibly tiny 0 bytes
	I0806 00:37:52.089257    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 00:37:52.089300    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem (1078 bytes)
	I0806 00:37:52.089366    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem (1123 bytes)
	I0806 00:37:52.089422    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem (1679 bytes)
	I0806 00:37:52.089542    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem (1708 bytes)
	I0806 00:37:52.089590    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.089613    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.089632    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437.pem -> /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.090046    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 00:37:52.111710    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 00:37:52.131907    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 00:37:52.151479    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0806 00:37:52.171693    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0806 00:37:52.191484    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 00:37:52.211176    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 00:37:52.230802    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 00:37:52.250506    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem --> /usr/share/ca-certificates/14372.pem (1708 bytes)
	I0806 00:37:52.270606    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 00:37:52.290275    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437.pem --> /usr/share/ca-certificates/1437.pem (1338 bytes)
	I0806 00:37:52.309237    4292 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 00:37:52.323119    4292 ssh_runner.go:195] Run: openssl version
	I0806 00:37:52.327113    4292 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0806 00:37:52.327315    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14372.pem && ln -fs /usr/share/ca-certificates/14372.pem /etc/ssl/certs/14372.pem"
	I0806 00:37:52.335532    4292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.338816    4292 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  6 07:14 /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.338844    4292 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:14 /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.338901    4292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.343016    4292 command_runner.go:130] > 3ec20f2e
	I0806 00:37:52.343165    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14372.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 00:37:52.351433    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 00:37:52.362210    4292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.368669    4292 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  6 07:05 /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.368937    4292 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:05 /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.368987    4292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.373469    4292 command_runner.go:130] > b5213941
	I0806 00:37:52.373704    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 00:37:52.384235    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1437.pem && ln -fs /usr/share/ca-certificates/1437.pem /etc/ssl/certs/1437.pem"
	I0806 00:37:52.395305    4292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.400212    4292 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  6 07:14 /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.400421    4292 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:14 /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.400474    4292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.406136    4292 command_runner.go:130] > 51391683
	I0806 00:37:52.406235    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1437.pem /etc/ssl/certs/51391683.0"
	I0806 00:37:52.415464    4292 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 00:37:52.418597    4292 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0806 00:37:52.418637    4292 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0806 00:37:52.418680    4292 kubeadm.go:392] StartCluster: {Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:37:52.418767    4292 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0806 00:37:52.431331    4292 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 00:37:52.439651    4292 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0806 00:37:52.439663    4292 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0806 00:37:52.439684    4292 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0806 00:37:52.439814    4292 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 00:37:52.447838    4292 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 00:37:52.455844    4292 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0806 00:37:52.455854    4292 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0806 00:37:52.455860    4292 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0806 00:37:52.455865    4292 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 00:37:52.455878    4292 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 00:37:52.455884    4292 kubeadm.go:157] found existing configuration files:
	
	I0806 00:37:52.455917    4292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 00:37:52.463564    4292 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 00:37:52.463581    4292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 00:37:52.463638    4292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 00:37:52.471500    4292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 00:37:52.479060    4292 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 00:37:52.479083    4292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 00:37:52.479115    4292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 00:37:52.487038    4292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 00:37:52.494658    4292 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 00:37:52.494678    4292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 00:37:52.494715    4292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 00:37:52.502699    4292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 00:37:52.510396    4292 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 00:37:52.510413    4292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 00:37:52.510448    4292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 00:37:52.518459    4292 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 00:37:52.582551    4292 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0806 00:37:52.582567    4292 command_runner.go:130] > [init] Using Kubernetes version: v1.30.3
	I0806 00:37:52.582622    4292 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 00:37:52.582630    4292 command_runner.go:130] > [preflight] Running pre-flight checks
	I0806 00:37:52.670948    4292 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 00:37:52.670966    4292 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 00:37:52.671056    4292 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 00:37:52.671068    4292 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 00:37:52.671166    4292 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 00:37:52.671175    4292 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 00:37:52.840152    4292 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 00:37:52.840173    4292 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 00:37:52.860448    4292 out.go:204]   - Generating certificates and keys ...
	I0806 00:37:52.860515    4292 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0806 00:37:52.860522    4292 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 00:37:52.860574    4292 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0806 00:37:52.860578    4292 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 00:37:53.262704    4292 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0806 00:37:53.262716    4292 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0806 00:37:53.357977    4292 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0806 00:37:53.357990    4292 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0806 00:37:53.460380    4292 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0806 00:37:53.460383    4292 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0806 00:37:53.557795    4292 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0806 00:37:53.557804    4292 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0806 00:37:53.672961    4292 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0806 00:37:53.672972    4292 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0806 00:37:53.673143    4292 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-100000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0806 00:37:53.673153    4292 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-100000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0806 00:37:53.823821    4292 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0806 00:37:53.823828    4292 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0806 00:37:53.823935    4292 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-100000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0806 00:37:53.823943    4292 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-100000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0806 00:37:53.907043    4292 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0806 00:37:53.907053    4292 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0806 00:37:54.170203    4292 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0806 00:37:54.170215    4292 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0806 00:37:54.232963    4292 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0806 00:37:54.232976    4292 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0806 00:37:54.233108    4292 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 00:37:54.233115    4292 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 00:37:54.560300    4292 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 00:37:54.560310    4292 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 00:37:54.689503    4292 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0806 00:37:54.689520    4292 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0806 00:37:54.772704    4292 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 00:37:54.772714    4292 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 00:37:54.901757    4292 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 00:37:54.901770    4292 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 00:37:55.057967    4292 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 00:37:55.057987    4292 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 00:37:55.058372    4292 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 00:37:55.058381    4292 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 00:37:55.060093    4292 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 00:37:55.060100    4292 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 00:37:55.081494    4292 out.go:204]   - Booting up control plane ...
	I0806 00:37:55.081559    4292 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 00:37:55.081566    4292 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 00:37:55.081622    4292 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 00:37:55.081627    4292 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 00:37:55.081688    4292 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 00:37:55.081706    4292 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 00:37:55.081835    4292 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 00:37:55.081836    4292 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 00:37:55.081921    4292 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 00:37:55.081928    4292 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 00:37:55.081962    4292 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 00:37:55.081972    4292 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0806 00:37:55.190382    4292 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0806 00:37:55.190382    4292 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0806 00:37:55.190467    4292 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0806 00:37:55.190474    4292 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0806 00:37:55.692270    4292 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.007026ms
	I0806 00:37:55.692288    4292 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 502.007026ms
	I0806 00:37:55.692374    4292 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0806 00:37:55.692383    4292 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0806 00:37:59.693684    4292 kubeadm.go:310] [api-check] The API server is healthy after 4.003026548s
	I0806 00:37:59.693693    4292 command_runner.go:130] > [api-check] The API server is healthy after 4.003026548s
	I0806 00:37:59.705633    4292 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0806 00:37:59.705646    4292 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0806 00:37:59.720099    4292 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0806 00:37:59.720109    4292 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0806 00:37:59.738249    4292 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0806 00:37:59.738275    4292 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0806 00:37:59.738423    4292 kubeadm.go:310] [mark-control-plane] Marking the node multinode-100000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0806 00:37:59.738434    4292 command_runner.go:130] > [mark-control-plane] Marking the node multinode-100000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0806 00:37:59.745383    4292 kubeadm.go:310] [bootstrap-token] Using token: vbomjh.qsf72loo4zgv06fc
	I0806 00:37:59.745397    4292 command_runner.go:130] > [bootstrap-token] Using token: vbomjh.qsf72loo4zgv06fc
	I0806 00:37:59.783358    4292 out.go:204]   - Configuring RBAC rules ...
	I0806 00:37:59.783539    4292 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0806 00:37:59.783560    4292 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0806 00:37:59.785907    4292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0806 00:37:59.785948    4292 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0806 00:37:59.826999    4292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0806 00:37:59.827006    4292 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0806 00:37:59.829623    4292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0806 00:37:59.829627    4292 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0806 00:37:59.832217    4292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0806 00:37:59.832231    4292 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0806 00:37:59.834614    4292 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0806 00:37:59.834628    4292 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0806 00:38:00.099434    4292 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0806 00:38:00.099444    4292 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0806 00:38:00.510267    4292 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0806 00:38:00.510286    4292 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0806 00:38:01.098516    4292 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0806 00:38:01.098535    4292 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0806 00:38:01.099426    4292 kubeadm.go:310] 
	I0806 00:38:01.099476    4292 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0806 00:38:01.099482    4292 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0806 00:38:01.099485    4292 kubeadm.go:310] 
	I0806 00:38:01.099571    4292 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0806 00:38:01.099579    4292 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0806 00:38:01.099583    4292 kubeadm.go:310] 
	I0806 00:38:01.099621    4292 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0806 00:38:01.099627    4292 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0806 00:38:01.099685    4292 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0806 00:38:01.099692    4292 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0806 00:38:01.099737    4292 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0806 00:38:01.099742    4292 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0806 00:38:01.099758    4292 kubeadm.go:310] 
	I0806 00:38:01.099805    4292 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0806 00:38:01.099811    4292 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0806 00:38:01.099816    4292 kubeadm.go:310] 
	I0806 00:38:01.099868    4292 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0806 00:38:01.099874    4292 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0806 00:38:01.099878    4292 kubeadm.go:310] 
	I0806 00:38:01.099924    4292 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0806 00:38:01.099932    4292 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0806 00:38:01.099998    4292 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0806 00:38:01.100012    4292 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0806 00:38:01.100083    4292 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0806 00:38:01.100088    4292 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0806 00:38:01.100092    4292 kubeadm.go:310] 
	I0806 00:38:01.100168    4292 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0806 00:38:01.100177    4292 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0806 00:38:01.100245    4292 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0806 00:38:01.100249    4292 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0806 00:38:01.100256    4292 kubeadm.go:310] 
	I0806 00:38:01.100330    4292 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token vbomjh.qsf72loo4zgv06fc \
	I0806 00:38:01.100335    4292 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vbomjh.qsf72loo4zgv06fc \
	I0806 00:38:01.100422    4292 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e \
	I0806 00:38:01.100428    4292 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e \
	I0806 00:38:01.100450    4292 command_runner.go:130] > 	--control-plane 
	I0806 00:38:01.100454    4292 kubeadm.go:310] 	--control-plane 
	I0806 00:38:01.100465    4292 kubeadm.go:310] 
	I0806 00:38:01.100533    4292 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0806 00:38:01.100538    4292 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0806 00:38:01.100545    4292 kubeadm.go:310] 
	I0806 00:38:01.100605    4292 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token vbomjh.qsf72loo4zgv06fc \
	I0806 00:38:01.100610    4292 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vbomjh.qsf72loo4zgv06fc \
	I0806 00:38:01.100694    4292 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e 
	I0806 00:38:01.100703    4292 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e 
	I0806 00:38:01.101330    4292 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 00:38:01.101334    4292 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 00:38:01.101354    4292 cni.go:84] Creating CNI manager for ""
	I0806 00:38:01.101361    4292 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0806 00:38:01.123627    4292 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0806 00:38:01.196528    4292 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0806 00:38:01.201237    4292 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0806 00:38:01.201250    4292 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0806 00:38:01.201255    4292 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0806 00:38:01.201260    4292 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0806 00:38:01.201265    4292 command_runner.go:130] > Access: 2024-08-06 07:35:44.089192446 +0000
	I0806 00:38:01.201275    4292 command_runner.go:130] > Modify: 2024-07-29 16:10:03.000000000 +0000
	I0806 00:38:01.201282    4292 command_runner.go:130] > Change: 2024-08-06 07:35:42.019366338 +0000
	I0806 00:38:01.201285    4292 command_runner.go:130] >  Birth: -
	I0806 00:38:01.201457    4292 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0806 00:38:01.201465    4292 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0806 00:38:01.217771    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0806 00:38:01.451925    4292 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0806 00:38:01.451939    4292 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0806 00:38:01.451946    4292 command_runner.go:130] > serviceaccount/kindnet created
	I0806 00:38:01.451949    4292 command_runner.go:130] > daemonset.apps/kindnet created
	I0806 00:38:01.451970    4292 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 00:38:01.452056    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:01.452057    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-100000 minikube.k8s.io/updated_at=2024_08_06T00_38_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8 minikube.k8s.io/name=multinode-100000 minikube.k8s.io/primary=true
	I0806 00:38:01.610233    4292 command_runner.go:130] > node/multinode-100000 labeled
	I0806 00:38:01.611382    4292 command_runner.go:130] > -16
	I0806 00:38:01.611408    4292 ops.go:34] apiserver oom_adj: -16
	I0806 00:38:01.611436    4292 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0806 00:38:01.611535    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:01.673352    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:02.112700    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:02.170574    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:02.612824    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:02.681015    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:03.112860    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:03.173114    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:03.612060    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:03.674241    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:04.112239    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:04.174075    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:04.613016    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:04.675523    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:05.112239    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:05.171613    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:05.611863    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:05.672963    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:06.112009    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:06.167728    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:06.613273    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:06.670554    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:07.113057    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:07.167700    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:07.613035    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:07.675035    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:08.113568    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:08.177386    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:08.611850    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:08.669063    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:09.113472    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:09.173560    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:09.613780    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:09.676070    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:10.112109    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:10.172674    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:10.613930    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:10.669788    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:11.112032    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:11.178288    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:11.612564    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:11.681621    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:12.112219    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:12.169314    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:12.612581    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:12.670247    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:13.113181    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:13.172574    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:13.613362    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:13.672811    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:14.112553    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:14.177904    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:14.612414    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:14.708737    4292 command_runner.go:130] > NAME      SECRETS   AGE
	I0806 00:38:14.708751    4292 command_runner.go:130] > default   0         0s
	I0806 00:38:14.710041    4292 kubeadm.go:1113] duration metric: took 13.257790627s to wait for elevateKubeSystemPrivileges
	I0806 00:38:14.710058    4292 kubeadm.go:394] duration metric: took 22.29094538s to StartCluster
	I0806 00:38:14.710072    4292 settings.go:142] acquiring lock: {Name:mk7aec99dc6d69d6a2c18b35ff8bde3cddf78620 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:38:14.710182    4292 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:38:14.710733    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/kubeconfig: {Name:mka547673b59bc4eb06e1f2c8130de31708dba29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:38:14.710987    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0806 00:38:14.710992    4292 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 00:38:14.711032    4292 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 00:38:14.711084    4292 addons.go:69] Setting storage-provisioner=true in profile "multinode-100000"
	I0806 00:38:14.711092    4292 addons.go:69] Setting default-storageclass=true in profile "multinode-100000"
	I0806 00:38:14.711119    4292 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-100000"
	I0806 00:38:14.711121    4292 addons.go:234] Setting addon storage-provisioner=true in "multinode-100000"
	I0806 00:38:14.711168    4292 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:38:14.711168    4292 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:38:14.711516    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.711537    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.711593    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.711618    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.720676    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52433
	I0806 00:38:14.721047    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52435
	I0806 00:38:14.721245    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.721337    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.721602    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.721612    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.721697    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.721714    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.721841    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.721914    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.721953    4292 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:38:14.722073    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:14.722146    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:38:14.722387    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.722420    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.724119    4292 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:38:14.724644    4292 kapi.go:59] client config for multinode-100000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key", CAFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x126711a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 00:38:14.725326    4292 cert_rotation.go:137] Starting client certificate rotation controller
	I0806 00:38:14.725514    4292 addons.go:234] Setting addon default-storageclass=true in "multinode-100000"
	I0806 00:38:14.725534    4292 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:38:14.725758    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.725781    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.731505    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52437
	I0806 00:38:14.731883    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.732214    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.732225    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.732427    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.732542    4292 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:38:14.732646    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:14.732716    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:38:14.733688    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:38:14.734469    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52439
	I0806 00:38:14.749366    4292 out.go:177] * Verifying Kubernetes components...
	I0806 00:38:14.750086    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.771676    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.771692    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.771908    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.772346    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.772371    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.781133    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52441
	I0806 00:38:14.781487    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.781841    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.781857    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.782071    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.782186    4292 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:38:14.782264    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:14.782343    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:38:14.783274    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:38:14.783391    4292 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 00:38:14.783400    4292 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 00:38:14.783408    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:38:14.783487    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:38:14.783566    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:38:14.783647    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:38:14.783724    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:38:14.807507    4292 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:38:14.814402    4292 command_runner.go:130] > apiVersion: v1
	I0806 00:38:14.814414    4292 command_runner.go:130] > data:
	I0806 00:38:14.814417    4292 command_runner.go:130] >   Corefile: |
	I0806 00:38:14.814421    4292 command_runner.go:130] >     .:53 {
	I0806 00:38:14.814427    4292 command_runner.go:130] >         errors
	I0806 00:38:14.814434    4292 command_runner.go:130] >         health {
	I0806 00:38:14.814462    4292 command_runner.go:130] >            lameduck 5s
	I0806 00:38:14.814467    4292 command_runner.go:130] >         }
	I0806 00:38:14.814470    4292 command_runner.go:130] >         ready
	I0806 00:38:14.814475    4292 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0806 00:38:14.814479    4292 command_runner.go:130] >            pods insecure
	I0806 00:38:14.814483    4292 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0806 00:38:14.814491    4292 command_runner.go:130] >            ttl 30
	I0806 00:38:14.814494    4292 command_runner.go:130] >         }
	I0806 00:38:14.814498    4292 command_runner.go:130] >         prometheus :9153
	I0806 00:38:14.814502    4292 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0806 00:38:14.814511    4292 command_runner.go:130] >            max_concurrent 1000
	I0806 00:38:14.814515    4292 command_runner.go:130] >         }
	I0806 00:38:14.814519    4292 command_runner.go:130] >         cache 30
	I0806 00:38:14.814522    4292 command_runner.go:130] >         loop
	I0806 00:38:14.814527    4292 command_runner.go:130] >         reload
	I0806 00:38:14.814530    4292 command_runner.go:130] >         loadbalance
	I0806 00:38:14.814541    4292 command_runner.go:130] >     }
	I0806 00:38:14.814545    4292 command_runner.go:130] > kind: ConfigMap
	I0806 00:38:14.814548    4292 command_runner.go:130] > metadata:
	I0806 00:38:14.814555    4292 command_runner.go:130] >   creationTimestamp: "2024-08-06T07:38:00Z"
	I0806 00:38:14.814559    4292 command_runner.go:130] >   name: coredns
	I0806 00:38:14.814563    4292 command_runner.go:130] >   namespace: kube-system
	I0806 00:38:14.814566    4292 command_runner.go:130] >   resourceVersion: "257"
	I0806 00:38:14.814570    4292 command_runner.go:130] >   uid: d8fd854e-ee58-4cd2-8723-2418b89b5dc3
	I0806 00:38:14.814679    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0806 00:38:14.866135    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:38:14.866436    4292 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 00:38:14.866454    4292 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 00:38:14.866500    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:38:14.866990    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:38:14.867164    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:38:14.867290    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:38:14.867406    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:38:14.872742    4292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 00:38:15.241341    4292 command_runner.go:130] > configmap/coredns replaced
	I0806 00:38:15.242685    4292 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I0806 00:38:15.242758    4292 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:38:15.242961    4292 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:38:15.243148    4292 kapi.go:59] client config for multinode-100000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key", CAFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x126711a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 00:38:15.243392    4292 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0806 00:38:15.243400    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.243407    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.243411    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.256678    4292 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0806 00:38:15.256695    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.256702    4292 round_trippers.go:580]     Audit-Id: c7c6b1c0-d638-405d-9826-1613f9442124
	I0806 00:38:15.256715    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.256719    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.256721    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.256724    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.256731    4292 round_trippers.go:580]     Content-Length: 291
	I0806 00:38:15.256734    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.256762    4292 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a7f2b260-b404-47f8-94a7-9444b4d2e65d","resourceVersion":"385","creationTimestamp":"2024-08-06T07:38:00Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0806 00:38:15.257109    4292 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a7f2b260-b404-47f8-94a7-9444b4d2e65d","resourceVersion":"385","creationTimestamp":"2024-08-06T07:38:00Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0806 00:38:15.257149    4292 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0806 00:38:15.257157    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.257163    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.257166    4292 round_trippers.go:473]     Content-Type: application/json
	I0806 00:38:15.257169    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.263818    4292 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0806 00:38:15.263831    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.263837    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.263840    4292 round_trippers.go:580]     Content-Length: 291
	I0806 00:38:15.263843    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.263846    4292 round_trippers.go:580]     Audit-Id: fc5baf31-13f0-4c94-a234-c9583698bc4a
	I0806 00:38:15.263849    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.263853    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.263856    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.263869    4292 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a7f2b260-b404-47f8-94a7-9444b4d2e65d","resourceVersion":"387","creationTimestamp":"2024-08-06T07:38:00Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0806 00:38:15.288440    4292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 00:38:15.316986    4292 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0806 00:38:15.318339    4292 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:38:15.318523    4292 kapi.go:59] client config for multinode-100000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key", CAFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x126711a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 00:38:15.318703    4292 node_ready.go:35] waiting up to 6m0s for node "multinode-100000" to be "Ready" ...
	I0806 00:38:15.318752    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:15.318757    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.318762    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.318766    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.318890    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.318897    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.319084    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.319089    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.319096    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.319104    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.319113    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.319239    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.319249    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.319298    4292 round_trippers.go:463] GET https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses
	I0806 00:38:15.319296    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.319304    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.319313    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.319316    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.328466    4292 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0806 00:38:15.328478    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.328484    4292 round_trippers.go:580]     Content-Length: 1273
	I0806 00:38:15.328487    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.328490    4292 round_trippers.go:580]     Audit-Id: 55117bdb-b1b1-4b1d-a091-1eb3d07a9569
	I0806 00:38:15.328493    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.328496    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.328498    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.328501    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.328521    4292 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"396"},"items":[{"metadata":{"name":"standard","uid":"db2316a9-24ea-47df-bf39-03322fc9a8eb","resourceVersion":"396","creationTimestamp":"2024-08-06T07:38:15Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-06T07:38:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0806 00:38:15.328567    4292 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0806 00:38:15.328581    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.328586    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.328590    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.328593    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.328596    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.328599    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.328602    4292 round_trippers.go:580]     Audit-Id: 7ce70ed0-47c9-432d-8e5b-ac52e38e59a7
	I0806 00:38:15.328766    4292 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"db2316a9-24ea-47df-bf39-03322fc9a8eb","resourceVersion":"396","creationTimestamp":"2024-08-06T07:38:15Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-06T07:38:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0806 00:38:15.328802    4292 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0806 00:38:15.328808    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.328813    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.328818    4292 round_trippers.go:473]     Content-Type: application/json
	I0806 00:38:15.328820    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.330337    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:15.340216    4292 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0806 00:38:15.340231    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.340236    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.340243    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.340247    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.340251    4292 round_trippers.go:580]     Content-Length: 1220
	I0806 00:38:15.340254    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.340257    4292 round_trippers.go:580]     Audit-Id: 6dc8b90a-612f-4331-8c4e-911fcb5e8b97
	I0806 00:38:15.340261    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.340479    4292 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"db2316a9-24ea-47df-bf39-03322fc9a8eb","resourceVersion":"396","creationTimestamp":"2024-08-06T07:38:15Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-06T07:38:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0806 00:38:15.340564    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.340574    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.340728    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.340739    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.340746    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.606405    4292 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0806 00:38:15.610350    4292 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0806 00:38:15.615396    4292 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0806 00:38:15.619891    4292 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0806 00:38:15.627349    4292 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0806 00:38:15.635206    4292 command_runner.go:130] > pod/storage-provisioner created
	I0806 00:38:15.636675    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.636686    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.636830    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.636833    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.636843    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.636852    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.636857    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.636972    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.636980    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.636995    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.660876    4292 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0806 00:38:15.681735    4292 addons.go:510] duration metric: took 970.696783ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0806 00:38:15.744023    4292 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0806 00:38:15.744043    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.744049    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.744053    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.745471    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:15.745481    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.745486    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.745489    4292 round_trippers.go:580]     Audit-Id: 2e02dd3c-4368-4363-aef8-c54cb00d4e41
	I0806 00:38:15.745492    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.745495    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.745497    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.745500    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.745503    4292 round_trippers.go:580]     Content-Length: 291
	I0806 00:38:15.745519    4292 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a7f2b260-b404-47f8-94a7-9444b4d2e65d","resourceVersion":"399","creationTimestamp":"2024-08-06T07:38:00Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0806 00:38:15.745572    4292 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-100000" context rescaled to 1 replicas
	I0806 00:38:15.820125    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:15.820137    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.820143    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.820145    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.821478    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:15.821488    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.821495    4292 round_trippers.go:580]     Audit-Id: 2538e82b-a5b8-4cce-b67d-49b0a0cc6ccb
	I0806 00:38:15.821499    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.821504    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.821509    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.821513    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.821517    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.821699    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:16.318995    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:16.319022    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:16.319044    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:16.319050    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:16.321451    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:16.321466    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:16.321473    4292 round_trippers.go:580]     Audit-Id: 6d358883-b606-4bf9-b02f-6cb3dcc86ebb
	I0806 00:38:16.321478    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:16.321482    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:16.321507    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:16.321515    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:16.321519    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:16 GMT
	I0806 00:38:16.321636    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:16.819864    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:16.819880    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:16.819887    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:16.819892    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:16.822003    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:16.822013    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:16.822019    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:16.822032    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:16 GMT
	I0806 00:38:16.822039    4292 round_trippers.go:580]     Audit-Id: 688c294c-2ec1-4257-9ae2-31048566e1a5
	I0806 00:38:16.822042    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:16.822045    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:16.822048    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:16.822127    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:17.319875    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:17.319887    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:17.319893    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:17.319898    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:17.324202    4292 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 00:38:17.324219    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:17.324228    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:17.324233    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:17.324237    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:17.324247    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:17.324251    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:17 GMT
	I0806 00:38:17.324253    4292 round_trippers.go:580]     Audit-Id: 3cbcad32-1d66-4480-8eea-e0ba3baeb718
	I0806 00:38:17.324408    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:17.324668    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:17.818929    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:17.818941    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:17.818948    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:17.818952    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:17.820372    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:17.820383    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:17.820390    4292 round_trippers.go:580]     Audit-Id: 1b64d2ad-91d1-49c6-8964-cd044f7ab24f
	I0806 00:38:17.820395    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:17.820400    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:17.820404    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:17.820407    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:17.820409    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:17 GMT
	I0806 00:38:17.820562    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:18.318915    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:18.318928    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:18.318934    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:18.318937    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:18.320383    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:18.320392    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:18.320396    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:18.320400    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:18 GMT
	I0806 00:38:18.320403    4292 round_trippers.go:580]     Audit-Id: b404a6ee-15b9-4e15-b8f8-4cd9324a513d
	I0806 00:38:18.320405    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:18.320408    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:18.320411    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:18.320536    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:18.819634    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:18.819647    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:18.819654    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:18.819657    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:18.821628    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:18.821635    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:18.821639    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:18 GMT
	I0806 00:38:18.821643    4292 round_trippers.go:580]     Audit-Id: 12545d9e-2520-4675-8957-dd291bc1d252
	I0806 00:38:18.821646    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:18.821649    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:18.821651    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:18.821654    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:18.821749    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:19.319242    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:19.319258    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:19.319264    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:19.319267    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:19.320611    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:19.320621    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:19.320627    4292 round_trippers.go:580]     Audit-Id: a9b124b2-ff49-4d7d-961a-c4a1b6b3e4ab
	I0806 00:38:19.320630    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:19.320632    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:19.320635    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:19.320639    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:19.320642    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:19 GMT
	I0806 00:38:19.320781    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:19.820342    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:19.820371    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:19.820428    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:19.820437    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:19.823221    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:19.823242    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:19.823252    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:19.823258    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:19.823266    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:19.823272    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:19 GMT
	I0806 00:38:19.823291    4292 round_trippers.go:580]     Audit-Id: 9330a785-b406-42d7-a74c-e80b34311e1a
	I0806 00:38:19.823302    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:19.823409    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:19.823671    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:20.319027    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:20.319043    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:20.319051    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:20.319056    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:20.320812    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:20.320821    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:20.320827    4292 round_trippers.go:580]     Audit-Id: 1d9840bb-ba8b-45f8-852f-8ef7f645c8bd
	I0806 00:38:20.320830    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:20.320832    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:20.320835    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:20.320838    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:20.320841    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:20 GMT
	I0806 00:38:20.321034    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:20.819543    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:20.819566    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:20.819578    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:20.819585    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:20.822277    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:20.822293    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:20.822300    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:20.822303    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:20.822307    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:20 GMT
	I0806 00:38:20.822310    4292 round_trippers.go:580]     Audit-Id: 6a96712c-fdd2-4036-95c0-27109366b2b5
	I0806 00:38:20.822313    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:20.822332    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:20.822436    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:21.319938    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:21.320061    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:21.320076    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:21.320084    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:21.322332    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:21.322343    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:21.322350    4292 round_trippers.go:580]     Audit-Id: b6796df6-8c9c-475a-b9c2-e73edb1c0720
	I0806 00:38:21.322355    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:21.322359    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:21.322362    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:21.322366    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:21.322370    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:21 GMT
	I0806 00:38:21.322503    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:21.819349    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:21.819372    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:21.819383    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:21.819388    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:21.821890    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:21.821905    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:21.821912    4292 round_trippers.go:580]     Audit-Id: 89b2a861-f5a0-43e4-9d3f-01f7230eecc8
	I0806 00:38:21.821916    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:21.821920    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:21.821923    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:21.821927    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:21.821931    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:21 GMT
	I0806 00:38:21.822004    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:22.320544    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:22.320565    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:22.320576    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:22.320581    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:22.322858    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:22.322872    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:22.322879    4292 round_trippers.go:580]     Audit-Id: 70ae59be-bf9a-4c7a-9fb8-93ea504768fb
	I0806 00:38:22.322885    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:22.322888    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:22.322891    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:22.322895    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:22.322897    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:22 GMT
	I0806 00:38:22.323158    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:22.323412    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:22.819095    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:22.819114    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:22.819126    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:22.819132    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:22.821284    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:22.821297    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:22.821307    4292 round_trippers.go:580]     Audit-Id: 1c5d80ab-21c3-4733-bd98-f4c681e0fe0e
	I0806 00:38:22.821313    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:22.821318    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:22.821321    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:22.821324    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:22.821334    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:22 GMT
	I0806 00:38:22.821552    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:23.319478    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:23.319500    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:23.319518    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:23.319524    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:23.322104    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:23.322124    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:23.322132    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:23.322137    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:23.322143    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:23.322146    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:23.322156    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:23 GMT
	I0806 00:38:23.322161    4292 round_trippers.go:580]     Audit-Id: 5276d3f7-64a0-4983-b60c-4943cbdfd74f
	I0806 00:38:23.322305    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:23.819102    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:23.819121    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:23.819130    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:23.819135    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:23.821174    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:23.821208    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:23.821216    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:23.821222    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:23.821227    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:23.821230    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:23.821241    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:23 GMT
	I0806 00:38:23.821254    4292 round_trippers.go:580]     Audit-Id: 9a86a309-2e1e-4b43-9975-baf4a0c93f44
	I0806 00:38:23.821483    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:24.320265    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:24.320287    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:24.320299    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:24.320305    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:24.323064    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:24.323097    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:24.323123    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:24.323140    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:24.323149    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:24.323178    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:24.323185    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:24 GMT
	I0806 00:38:24.323196    4292 round_trippers.go:580]     Audit-Id: b0ef4ff1-b4d6-4fd5-870c-46b9f544b517
	I0806 00:38:24.323426    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:24.323675    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:24.819060    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:24.819080    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:24.819097    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:24.819136    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:24.821377    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:24.821390    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:24.821397    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:24 GMT
	I0806 00:38:24.821402    4292 round_trippers.go:580]     Audit-Id: b050183e-0245-4d40-9972-e2dd2be24181
	I0806 00:38:24.821405    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:24.821409    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:24.821413    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:24.821418    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:24.821619    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:25.319086    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:25.319102    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:25.319110    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:25.319114    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:25.321127    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:25.321149    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:25.321154    4292 round_trippers.go:580]     Audit-Id: b27c2996-2cfb-4ec2-83b6-49df62cf6805
	I0806 00:38:25.321177    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:25.321180    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:25.321184    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:25.321186    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:25.321194    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:25 GMT
	I0806 00:38:25.321259    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:25.820656    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:25.820678    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:25.820689    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:25.820695    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:25.823182    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:25.823194    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:25.823205    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:25.823210    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:25.823213    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:25.823216    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:25.823219    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:25 GMT
	I0806 00:38:25.823222    4292 round_trippers.go:580]     Audit-Id: e11f3fd5-b1c3-44c0-931c-e7172ae35765
	I0806 00:38:25.823311    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:26.320693    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:26.320710    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:26.320717    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:26.320721    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:26.322330    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:26.322339    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:26.322344    4292 round_trippers.go:580]     Audit-Id: 0c372b78-f3b7-43f2-a7aa-6ec405f17ce3
	I0806 00:38:26.322347    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:26.322350    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:26.322353    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:26.322363    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:26.322366    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:26 GMT
	I0806 00:38:26.322578    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:26.820921    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:26.820948    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:26.820966    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:26.820972    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:26.823698    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:26.823713    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:26.823723    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:26.823730    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:26.823739    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:26 GMT
	I0806 00:38:26.823757    4292 round_trippers.go:580]     Audit-Id: e8e852a8-07b7-455b-8f5b-ff9801610b22
	I0806 00:38:26.823766    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:26.823770    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:26.824211    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:26.824465    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:27.321232    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:27.321253    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:27.321265    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:27.321270    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:27.324530    4292 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:38:27.324543    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:27.324550    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:27.324554    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:27 GMT
	I0806 00:38:27.324566    4292 round_trippers.go:580]     Audit-Id: 4a0b2d15-d15f-46de-8b4a-13a9d4121efd
	I0806 00:38:27.324572    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:27.324578    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:27.324583    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:27.324732    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:27.820148    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:27.820170    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:27.820181    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:27.820186    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:27.822835    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:27.822859    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:27.823023    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:27.823030    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:27.823033    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:27.823038    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:27.823046    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:27 GMT
	I0806 00:38:27.823049    4292 round_trippers.go:580]     Audit-Id: 77dd4240-18e0-49c7-8881-ae5df446f885
	I0806 00:38:27.823127    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:28.319391    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:28.319412    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:28.319423    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:28.319431    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:28.321889    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:28.321906    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:28.321916    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:28.321923    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:28.321927    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:28.321930    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:28 GMT
	I0806 00:38:28.321933    4292 round_trippers.go:580]     Audit-Id: d4ff4fc8-d53b-4307-82a0-9a61164b0b18
	I0806 00:38:28.321937    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:28.322088    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:28.819334    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:28.819362    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:28.819374    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:28.819385    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:28.821814    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:28.821826    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:28.821833    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:28.821838    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:28.821843    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:28.821847    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:28.821851    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:28 GMT
	I0806 00:38:28.821855    4292 round_trippers.go:580]     Audit-Id: 9a79b284-c2c3-4adb-9d74-73805465144b
	I0806 00:38:28.821988    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:29.320103    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:29.320120    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:29.320128    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:29.320134    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:29.321966    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:29.321980    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:29.321987    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:29.322000    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:29.322005    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:29.322008    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:29 GMT
	I0806 00:38:29.322020    4292 round_trippers.go:580]     Audit-Id: 749bcf9b-24c9-4fac-99d8-ad9e961b1897
	I0806 00:38:29.322024    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:29.322094    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:29.322341    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:29.819722    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:29.819743    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:29.819752    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:29.819760    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:29.822636    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:29.822668    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:29.822700    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:29.822711    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:29.822721    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:29.822735    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:29 GMT
	I0806 00:38:29.822748    4292 round_trippers.go:580]     Audit-Id: 5408f9b5-fba3-4495-a0b7-9791cf82019c
	I0806 00:38:29.822773    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:29.822903    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:30.320349    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:30.320370    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.320380    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.320385    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.322518    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:30.322531    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.322538    4292 round_trippers.go:580]     Audit-Id: 1df1df85-a25c-4470-876a-7b00620c8f9b
	I0806 00:38:30.322543    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.322546    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.322550    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.322553    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.322558    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.322794    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:30.820065    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:30.820087    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.820099    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.820111    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.822652    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:30.822673    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.822683    4292 round_trippers.go:580]     Audit-Id: 0926ae78-d98d-44a5-8489-5522ccd95503
	I0806 00:38:30.822689    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.822695    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.822700    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.822706    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.822713    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.823032    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:30.823315    4292 node_ready.go:49] node "multinode-100000" has status "Ready":"True"
	I0806 00:38:30.823329    4292 node_ready.go:38] duration metric: took 15.504306549s for node "multinode-100000" to be "Ready" ...
	I0806 00:38:30.823341    4292 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 00:38:30.823387    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:30.823395    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.823403    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.823407    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.825747    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:30.825756    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.825761    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.825764    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.825768    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.825770    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.825773    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.825775    4292 round_trippers.go:580]     Audit-Id: f1883856-a563-4d68-a4ed-7bface4b980a
	I0806 00:38:30.827206    4292 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"431","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56289 chars]
	I0806 00:38:30.829456    4292 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:30.829498    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:38:30.829503    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.829508    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.829512    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.830675    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:30.830684    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.830691    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.830696    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.830704    4292 round_trippers.go:580]     Audit-Id: f42eab96-6adf-4fcb-9345-e180ca00b73d
	I0806 00:38:30.830715    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.830718    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.830720    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.830856    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"431","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0806 00:38:30.831092    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:30.831099    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.831105    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.831107    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.832184    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:30.832191    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.832197    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.832203    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.832207    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.832212    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.832218    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.832226    4292 round_trippers.go:580]     Audit-Id: d34ccfc2-089c-4010-b991-cc425a2b2446
	I0806 00:38:30.832371    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.329830    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:38:31.329844    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.329850    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.329854    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.331738    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:31.331767    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.331789    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.331808    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.331813    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.331817    4292 round_trippers.go:580]     Audit-Id: 32294b1b-fd5c-43f7-9851-1c5e5d04c3d9
	I0806 00:38:31.331820    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.331823    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.331921    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"431","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0806 00:38:31.332207    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.332215    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.332221    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.332225    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.333311    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:31.333324    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.333331    4292 round_trippers.go:580]     Audit-Id: a8b9458e-7f48-4e61-9daf-b2c4a52b1285
	I0806 00:38:31.333336    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.333342    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.333347    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.333351    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.333369    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.333493    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.830019    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:38:31.830040    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.830057    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.830063    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.832040    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:31.832055    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.832062    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.832068    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.832072    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.832076    4292 round_trippers.go:580]     Audit-Id: eae85e40-d774-4e35-8513-1a20542ce5f5
	I0806 00:38:31.832079    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.832082    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.832316    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"446","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6576 chars]
	I0806 00:38:31.832691    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.832701    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.832710    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.832715    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.833679    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.833688    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.833694    4292 round_trippers.go:580]     Audit-Id: ecd49a1b-eb24-4191-89bb-5cb071fd543a
	I0806 00:38:31.833699    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.833702    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.833711    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.833714    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.833717    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.833906    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.834082    4292 pod_ready.go:92] pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:31.834093    4292 pod_ready.go:81] duration metric: took 1.004604302s for pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.834101    4292 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.834131    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-100000
	I0806 00:38:31.834136    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.834141    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.834145    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.835126    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.835134    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.835139    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.835144    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.835147    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.835152    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.835155    4292 round_trippers.go:580]     Audit-Id: 8f3355e7-ed89-4a5c-9ef4-3f319a0b7eef
	I0806 00:38:31.835157    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.835289    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-100000","namespace":"kube-system","uid":"227ab7d9-399e-4151-bee7-1520182e38fe","resourceVersion":"333","creationTimestamp":"2024-08-06T07:37:59Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"4d956ffcd8bdef6a75a3174d9c9d792c","kubernetes.io/config.mirror":"4d956ffcd8bdef6a75a3174d9c9d792c","kubernetes.io/config.seen":"2024-08-06T07:37:55.730523562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:37:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0806 00:38:31.835498    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.835505    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.835510    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.835514    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.836524    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:31.836533    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.836539    4292 round_trippers.go:580]     Audit-Id: a9fdb4f7-31e3-48e4-b5f3-023b2c5e4bab
	I0806 00:38:31.836547    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.836553    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.836556    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.836562    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.836568    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.836674    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.836837    4292 pod_ready.go:92] pod "etcd-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:31.836847    4292 pod_ready.go:81] duration metric: took 2.741532ms for pod "etcd-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.836854    4292 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.836883    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-100000
	I0806 00:38:31.836888    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.836894    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.836898    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.837821    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.837830    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.837836    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.837840    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.837844    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.837846    4292 round_trippers.go:580]     Audit-Id: 32a7a6c7-72cf-4b7f-8f80-7ebb5aaaf666
	I0806 00:38:31.837850    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.837853    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.838003    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-100000","namespace":"kube-system","uid":"ce1dee9b-5f30-49a9-9066-7faf5f65c4d3","resourceVersion":"331","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"7812fbdfd4f741d8b504bcb30d9268c5","kubernetes.io/config.mirror":"7812fbdfd4f741d8b504bcb30d9268c5","kubernetes.io/config.seen":"2024-08-06T07:38:00.425843150Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7684 chars]
	I0806 00:38:31.838230    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.838237    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.838243    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.838247    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.839014    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.839023    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.839030    4292 round_trippers.go:580]     Audit-Id: 7f28e0f4-8551-4462-aec2-766b8d2482cb
	I0806 00:38:31.839036    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.839040    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.839042    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.839045    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.839048    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.839181    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.839335    4292 pod_ready.go:92] pod "kube-apiserver-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:31.839345    4292 pod_ready.go:81] duration metric: took 2.482949ms for pod "kube-apiserver-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.839352    4292 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.839378    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-100000
	I0806 00:38:31.839383    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.839388    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.839392    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.840298    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.840305    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.840310    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.840313    4292 round_trippers.go:580]     Audit-Id: cf384588-551f-4b8a-b13b-1adda6dff10a
	I0806 00:38:31.840317    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.840320    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.840324    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.840328    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.840495    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-100000","namespace":"kube-system","uid":"cefe88fb-c337-47c3-b4f2-acdadde539f2","resourceVersion":"329","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0ae29164078dfb7d8ac7d5a935c4d875","kubernetes.io/config.mirror":"0ae29164078dfb7d8ac7d5a935c4d875","kubernetes.io/config.seen":"2024-08-06T07:38:00.425770816Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7259 chars]
	I0806 00:38:31.840707    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.840714    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.840719    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.840722    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.841465    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.841471    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.841476    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.841481    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.841487    4292 round_trippers.go:580]     Audit-Id: 9a301694-659b-414d-8736-740501267c17
	I0806 00:38:31.841491    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.841496    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.841500    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.841678    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.841830    4292 pod_ready.go:92] pod "kube-controller-manager-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:31.841836    4292 pod_ready.go:81] duration metric: took 2.479787ms for pod "kube-controller-manager-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.841842    4292 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-crsrr" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.841875    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crsrr
	I0806 00:38:31.841880    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.841885    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.841890    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.842875    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.842883    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.842888    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.842891    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.842895    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.842898    4292 round_trippers.go:580]     Audit-Id: 9e07db72-d867-47d3-adbc-514b547e8978
	I0806 00:38:31.842901    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.842904    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.843113    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-crsrr","generateName":"kube-proxy-","namespace":"kube-system","uid":"f72beca3-9601-4aad-b3ba-33f8de5db052","resourceVersion":"403","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aeb7868a-2175-4480-b58d-3eb9a593c884","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aeb7868a-2175-4480-b58d-3eb9a593c884\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
	I0806 00:38:32.021239    4292 request.go:629] Waited for 177.889914ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:32.021360    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:32.021372    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.021384    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.021390    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.024288    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:32.024309    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.024318    4292 round_trippers.go:580]     Audit-Id: d85fbd21-5256-48bd-b92b-10eb012d9c7a
	I0806 00:38:32.024322    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.024327    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.024331    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.024336    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.024339    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.024617    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:32.024865    4292 pod_ready.go:92] pod "kube-proxy-crsrr" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:32.024877    4292 pod_ready.go:81] duration metric: took 183.025974ms for pod "kube-proxy-crsrr" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:32.024887    4292 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:32.222202    4292 request.go:629] Waited for 197.196804ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-100000
	I0806 00:38:32.222252    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-100000
	I0806 00:38:32.222260    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.222284    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.222291    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.225758    4292 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:38:32.225776    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.225783    4292 round_trippers.go:580]     Audit-Id: 9c5c96d8-55ee-43bd-b8fe-af3b79432f55
	I0806 00:38:32.225788    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.225791    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.225797    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.225800    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.225803    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.225862    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-100000","namespace":"kube-system","uid":"773d7bde-86f3-4e9d-b4aa-67ca3b345180","resourceVersion":"332","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4d38f57d568be838072abd789adb44b9","kubernetes.io/config.mirror":"4d38f57d568be838072abd789adb44b9","kubernetes.io/config.seen":"2024-08-06T07:38:00.425836810Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0806 00:38:32.420759    4292 request.go:629] Waited for 194.652014ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:32.420927    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:32.420938    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.420949    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.420955    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.423442    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:32.423460    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.423471    4292 round_trippers.go:580]     Audit-Id: 04a6ba1a-a35c-4d8b-a087-80f9206646b4
	I0806 00:38:32.423478    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.423483    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.423488    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.423493    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.423499    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.423791    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:32.424052    4292 pod_ready.go:92] pod "kube-scheduler-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:32.424064    4292 pod_ready.go:81] duration metric: took 399.162309ms for pod "kube-scheduler-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:32.424073    4292 pod_ready.go:38] duration metric: took 1.600692444s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 00:38:32.424096    4292 api_server.go:52] waiting for apiserver process to appear ...
	I0806 00:38:32.424160    4292 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:38:32.436813    4292 command_runner.go:130] > 1953
	I0806 00:38:32.436840    4292 api_server.go:72] duration metric: took 17.725484476s to wait for apiserver process to appear ...
	I0806 00:38:32.436849    4292 api_server.go:88] waiting for apiserver healthz status ...
	I0806 00:38:32.436863    4292 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0806 00:38:32.440364    4292 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0806 00:38:32.440399    4292 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I0806 00:38:32.440404    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.440410    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.440421    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.440928    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:32.440937    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.440942    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.440946    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.440950    4292 round_trippers.go:580]     Content-Length: 263
	I0806 00:38:32.440953    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.440959    4292 round_trippers.go:580]     Audit-Id: c1a3bf62-d4bb-49fe-bb9c-6619b1793ab6
	I0806 00:38:32.440962    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.440965    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.440976    4292 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0806 00:38:32.441018    4292 api_server.go:141] control plane version: v1.30.3
	I0806 00:38:32.441028    4292 api_server.go:131] duration metric: took 4.174407ms to wait for apiserver health ...
	I0806 00:38:32.441033    4292 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 00:38:32.620918    4292 request.go:629] Waited for 179.84972ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:32.620960    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:32.620982    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.620988    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.620992    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.623183    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:32.623194    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.623199    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.623202    4292 round_trippers.go:580]     Audit-Id: 7febd61d-780d-47b6-884a-fdaf22170934
	I0806 00:38:32.623206    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.623211    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.623217    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.623221    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.623596    4292 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"446","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0806 00:38:32.624861    4292 system_pods.go:59] 8 kube-system pods found
	I0806 00:38:32.624876    4292 system_pods.go:61] "coredns-7db6d8ff4d-snf8h" [80bd44de-6f91-4e47-8832-a66b3c64808d] Running
	I0806 00:38:32.624880    4292 system_pods.go:61] "etcd-multinode-100000" [227ab7d9-399e-4151-bee7-1520182e38fe] Running
	I0806 00:38:32.624883    4292 system_pods.go:61] "kindnet-g2xk7" [84207ead-3403-4759-9bf2-ae0aa742699e] Running
	I0806 00:38:32.624886    4292 system_pods.go:61] "kube-apiserver-multinode-100000" [ce1dee9b-5f30-49a9-9066-7faf5f65c4d3] Running
	I0806 00:38:32.624890    4292 system_pods.go:61] "kube-controller-manager-multinode-100000" [cefe88fb-c337-47c3-b4f2-acdadde539f2] Running
	I0806 00:38:32.624895    4292 system_pods.go:61] "kube-proxy-crsrr" [f72beca3-9601-4aad-b3ba-33f8de5db052] Running
	I0806 00:38:32.624897    4292 system_pods.go:61] "kube-scheduler-multinode-100000" [773d7bde-86f3-4e9d-b4aa-67ca3b345180] Running
	I0806 00:38:32.624900    4292 system_pods.go:61] "storage-provisioner" [38b20fa5-6002-4e12-860f-1aa0047581b1] Running
	I0806 00:38:32.624904    4292 system_pods.go:74] duration metric: took 183.863815ms to wait for pod list to return data ...
	I0806 00:38:32.624911    4292 default_sa.go:34] waiting for default service account to be created ...
	I0806 00:38:32.821065    4292 request.go:629] Waited for 196.088199ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0806 00:38:32.821123    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0806 00:38:32.821132    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.821146    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.821153    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.824169    4292 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:38:32.824185    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.824192    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.824198    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.824203    4292 round_trippers.go:580]     Content-Length: 261
	I0806 00:38:32.824207    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.824210    4292 round_trippers.go:580]     Audit-Id: da9e49d4-6671-4b25-a056-32b71af0fb45
	I0806 00:38:32.824214    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.824217    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.824230    4292 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b920a0f4-26ad-4389-bfd3-1a9764da9619","resourceVersion":"336","creationTimestamp":"2024-08-06T07:38:14Z"}}]}
	I0806 00:38:32.824397    4292 default_sa.go:45] found service account: "default"
	I0806 00:38:32.824409    4292 default_sa.go:55] duration metric: took 199.488573ms for default service account to be created ...
	I0806 00:38:32.824419    4292 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 00:38:33.021550    4292 request.go:629] Waited for 197.072106ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:33.021720    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:33.021731    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:33.021741    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:33.021779    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:33.025126    4292 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:38:33.025143    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:33.025150    4292 round_trippers.go:580]     Audit-Id: e38b20d4-b38f-40c8-9e18-7f94f8f63289
	I0806 00:38:33.025155    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:33.025161    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:33.025166    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:33.025173    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:33.025177    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:33 GMT
	I0806 00:38:33.025737    4292 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"446","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0806 00:38:33.027034    4292 system_pods.go:86] 8 kube-system pods found
	I0806 00:38:33.027043    4292 system_pods.go:89] "coredns-7db6d8ff4d-snf8h" [80bd44de-6f91-4e47-8832-a66b3c64808d] Running
	I0806 00:38:33.027047    4292 system_pods.go:89] "etcd-multinode-100000" [227ab7d9-399e-4151-bee7-1520182e38fe] Running
	I0806 00:38:33.027050    4292 system_pods.go:89] "kindnet-g2xk7" [84207ead-3403-4759-9bf2-ae0aa742699e] Running
	I0806 00:38:33.027054    4292 system_pods.go:89] "kube-apiserver-multinode-100000" [ce1dee9b-5f30-49a9-9066-7faf5f65c4d3] Running
	I0806 00:38:33.027057    4292 system_pods.go:89] "kube-controller-manager-multinode-100000" [cefe88fb-c337-47c3-b4f2-acdadde539f2] Running
	I0806 00:38:33.027060    4292 system_pods.go:89] "kube-proxy-crsrr" [f72beca3-9601-4aad-b3ba-33f8de5db052] Running
	I0806 00:38:33.027066    4292 system_pods.go:89] "kube-scheduler-multinode-100000" [773d7bde-86f3-4e9d-b4aa-67ca3b345180] Running
	I0806 00:38:33.027069    4292 system_pods.go:89] "storage-provisioner" [38b20fa5-6002-4e12-860f-1aa0047581b1] Running
	I0806 00:38:33.027074    4292 system_pods.go:126] duration metric: took 202.645822ms to wait for k8s-apps to be running ...
	I0806 00:38:33.027081    4292 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 00:38:33.027147    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:38:33.038782    4292 system_svc.go:56] duration metric: took 11.697186ms WaitForService to wait for kubelet
	I0806 00:38:33.038797    4292 kubeadm.go:582] duration metric: took 18.327429775s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 00:38:33.038809    4292 node_conditions.go:102] verifying NodePressure condition ...
	I0806 00:38:33.220593    4292 request.go:629] Waited for 181.736174ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes
	I0806 00:38:33.220673    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0806 00:38:33.220683    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:33.220694    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:33.220703    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:33.223131    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:33.223147    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:33.223155    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:33 GMT
	I0806 00:38:33.223160    4292 round_trippers.go:580]     Audit-Id: c7a766de-973c-44db-9b8e-eb7ce291fdca
	I0806 00:38:33.223172    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:33.223177    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:33.223182    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:33.223222    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:33.223296    4292 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5011 chars]
	I0806 00:38:33.223576    4292 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 00:38:33.223592    4292 node_conditions.go:123] node cpu capacity is 2
	I0806 00:38:33.223604    4292 node_conditions.go:105] duration metric: took 184.787012ms to run NodePressure ...
	I0806 00:38:33.223614    4292 start.go:241] waiting for startup goroutines ...
	I0806 00:38:33.223627    4292 start.go:246] waiting for cluster config update ...
	I0806 00:38:33.223640    4292 start.go:255] writing updated cluster config ...
	I0806 00:38:33.244314    4292 out.go:177] 
	I0806 00:38:33.265217    4292 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:38:33.265273    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:38:33.287112    4292 out.go:177] * Starting "multinode-100000-m02" worker node in "multinode-100000" cluster
	I0806 00:38:33.345022    4292 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:38:33.345057    4292 cache.go:56] Caching tarball of preloaded images
	I0806 00:38:33.345244    4292 preload.go:172] Found /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0806 00:38:33.345262    4292 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 00:38:33.345351    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:38:33.346110    4292 start.go:360] acquireMachinesLock for multinode-100000-m02: {Name:mk23fe223591838ba69a1052c4474834b6e8897d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:38:33.346217    4292 start.go:364] duration metric: took 84.997µs to acquireMachinesLock for "multinode-100000-m02"
	I0806 00:38:33.346243    4292 start.go:93] Provisioning new machine with config: &{Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0806 00:38:33.346328    4292 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0806 00:38:33.367079    4292 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 00:38:33.367208    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:33.367236    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:33.376938    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52447
	I0806 00:38:33.377289    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:33.377644    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:33.377655    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:33.377842    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:33.377956    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:38:33.378049    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:33.378167    4292 start.go:159] libmachine.API.Create for "multinode-100000" (driver="hyperkit")
	I0806 00:38:33.378183    4292 client.go:168] LocalClient.Create starting
	I0806 00:38:33.378211    4292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem
	I0806 00:38:33.378259    4292 main.go:141] libmachine: Decoding PEM data...
	I0806 00:38:33.378273    4292 main.go:141] libmachine: Parsing certificate...
	I0806 00:38:33.378324    4292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem
	I0806 00:38:33.378363    4292 main.go:141] libmachine: Decoding PEM data...
	I0806 00:38:33.378372    4292 main.go:141] libmachine: Parsing certificate...
	I0806 00:38:33.378386    4292 main.go:141] libmachine: Running pre-create checks...
	I0806 00:38:33.378391    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .PreCreateCheck
	I0806 00:38:33.378464    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:33.378493    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetConfigRaw
	I0806 00:38:33.388269    4292 main.go:141] libmachine: Creating machine...
	I0806 00:38:33.388286    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .Create
	I0806 00:38:33.388457    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:33.388692    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | I0806 00:38:33.388444    4424 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 00:38:33.388794    4292 main.go:141] libmachine: (multinode-100000-m02) Downloading /Users/jenkins/minikube-integration/19370-944/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-944/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 00:38:33.588443    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | I0806 00:38:33.588344    4424 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa...
	I0806 00:38:33.635329    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | I0806 00:38:33.635211    4424 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/multinode-100000-m02.rawdisk...
	I0806 00:38:33.635352    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Writing magic tar header
	I0806 00:38:33.635368    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Writing SSH key tar header
	I0806 00:38:33.635773    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | I0806 00:38:33.635735    4424 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02 ...
	I0806 00:38:34.046661    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:34.046692    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/hyperkit.pid
	I0806 00:38:34.046795    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Using UUID 11e38ce6-805a-4a8b-9cb1-968ee3a613d4
	I0806 00:38:34.072180    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Generated MAC ee:b:b7:3a:75:5c
	I0806 00:38:34.072206    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000
	I0806 00:38:34.072252    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"11e38ce6-805a-4a8b-9cb1-968ee3a613d4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00011a450)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 00:38:34.072281    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"11e38ce6-805a-4a8b-9cb1-968ee3a613d4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00011a450)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 00:38:34.072340    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "11e38ce6-805a-4a8b-9cb1-968ee3a613d4", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/multinode-100000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"}
	I0806 00:38:34.072382    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 11e38ce6-805a-4a8b-9cb1-968ee3a613d4 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/multinode-100000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"
	I0806 00:38:34.072394    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0806 00:38:34.075231    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: Pid is 4427
	I0806 00:38:34.076417    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 0
	I0806 00:38:34.076438    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:34.076502    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:34.077372    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:34.077449    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:34.077468    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:34.077497    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:34.077509    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:34.077532    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:34.077550    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:34.077560    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:34.077570    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:34.077578    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:34.077587    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:34.077606    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:34.077631    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:34.077647    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:34.082964    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0806 00:38:34.092078    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0806 00:38:34.092798    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:38:34.092819    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:38:34.092831    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:38:34.092850    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:38:34.480770    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0806 00:38:34.480795    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0806 00:38:34.595499    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:38:34.595518    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:38:34.595530    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:38:34.595538    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:38:34.596350    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0806 00:38:34.596362    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0806 00:38:36.077787    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 1
	I0806 00:38:36.077803    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:36.077889    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:36.078719    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:36.078768    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:36.078779    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:36.078796    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:36.078805    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:36.078813    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:36.078820    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:36.078827    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:36.078837    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:36.078843    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:36.078849    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:36.078864    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:36.078881    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:36.078889    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:38.079369    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 2
	I0806 00:38:38.079385    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:38.079432    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:38.080212    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:38.080262    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:38.080273    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:38.080290    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:38.080296    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:38.080303    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:38.080310    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:38.080318    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:38.080325    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:38.080339    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:38.080355    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:38.080367    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:38.080376    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:38.080384    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:40.081876    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 3
	I0806 00:38:40.081892    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:40.081903    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:40.082774    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:40.082801    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:40.082812    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:40.082846    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:40.082873    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:40.082900    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:40.082918    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:40.082931    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:40.082940    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:40.082950    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:40.082966    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:40.082978    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:40.082987    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:40.082995    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:40.179725    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:40 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0806 00:38:40.179781    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:40 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0806 00:38:40.179795    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:40 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0806 00:38:40.203197    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:40 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0806 00:38:42.084360    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 4
	I0806 00:38:42.084374    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:42.084499    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:42.085281    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:42.085335    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:42.085343    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:42.085351    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:42.085358    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:42.085365    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:42.085371    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:42.085378    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:42.085386    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:42.085402    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:42.085414    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:42.085433    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:42.085441    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:42.085450    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:44.085602    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 5
	I0806 00:38:44.085628    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:44.085697    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:44.086496    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:44.086550    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 13 entries in /var/db/dhcpd_leases!
	I0806 00:38:44.086561    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b32483}
	I0806 00:38:44.086569    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found match: ee:b:b7:3a:75:5c
	I0806 00:38:44.086577    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | IP: 192.169.0.14
	I0806 00:38:44.086637    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetConfigRaw
	I0806 00:38:44.087855    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:44.087962    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:44.088059    4292 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0806 00:38:44.088068    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetState
	I0806 00:38:44.088141    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:44.088197    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:44.089006    4292 main.go:141] libmachine: Detecting operating system of created instance...
	I0806 00:38:44.089014    4292 main.go:141] libmachine: Waiting for SSH to be available...
	I0806 00:38:44.089023    4292 main.go:141] libmachine: Getting to WaitForSSH function...
	I0806 00:38:44.089029    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:44.089111    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:44.089190    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:44.089273    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:44.089354    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:44.089473    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:44.089664    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:44.089672    4292 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0806 00:38:45.153792    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:38:45.153806    4292 main.go:141] libmachine: Detecting the provisioner...
	I0806 00:38:45.153811    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.153942    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.154043    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.154169    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.154275    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.154425    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.154571    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.154581    4292 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0806 00:38:45.217564    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0806 00:38:45.217637    4292 main.go:141] libmachine: found compatible host: buildroot
	I0806 00:38:45.217648    4292 main.go:141] libmachine: Provisioning with buildroot...
	I0806 00:38:45.217668    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:38:45.217807    4292 buildroot.go:166] provisioning hostname "multinode-100000-m02"
	I0806 00:38:45.217817    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:38:45.217917    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.218023    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.218107    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.218194    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.218285    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.218407    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.218557    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.218566    4292 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-100000-m02 && echo "multinode-100000-m02" | sudo tee /etc/hostname
	I0806 00:38:45.293086    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-100000-m02
	
	I0806 00:38:45.293102    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.293254    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.293346    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.293437    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.293522    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.293658    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.293798    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.293811    4292 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-100000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-100000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-100000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 00:38:45.363408    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:38:45.363423    4292 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19370-944/.minikube CaCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19370-944/.minikube}
	I0806 00:38:45.363450    4292 buildroot.go:174] setting up certificates
	I0806 00:38:45.363457    4292 provision.go:84] configureAuth start
	I0806 00:38:45.363465    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:38:45.363605    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:38:45.363709    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.363796    4292 provision.go:143] copyHostCerts
	I0806 00:38:45.363827    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:38:45.363873    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem, removing ...
	I0806 00:38:45.363879    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:38:45.364378    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem (1078 bytes)
	I0806 00:38:45.364592    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:38:45.364623    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem, removing ...
	I0806 00:38:45.364628    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:38:45.364717    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem (1123 bytes)
	I0806 00:38:45.364875    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:38:45.364915    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem, removing ...
	I0806 00:38:45.364920    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:38:45.365034    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem (1679 bytes)
	I0806 00:38:45.365183    4292 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem org=jenkins.multinode-100000-m02 san=[127.0.0.1 192.169.0.14 localhost minikube multinode-100000-m02]
	I0806 00:38:45.437744    4292 provision.go:177] copyRemoteCerts
	I0806 00:38:45.437791    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 00:38:45.437806    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.437948    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.438040    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.438126    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.438207    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:38:45.477030    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0806 00:38:45.477105    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0806 00:38:45.496899    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0806 00:38:45.496965    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 00:38:45.516273    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0806 00:38:45.516341    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 00:38:45.536083    4292 provision.go:87] duration metric: took 172.615051ms to configureAuth
	I0806 00:38:45.536096    4292 buildroot.go:189] setting minikube options for container-runtime
	I0806 00:38:45.536221    4292 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:38:45.536234    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:45.536380    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.536470    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.536563    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.536650    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.536733    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.536861    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.536987    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.536994    4292 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0806 00:38:45.599518    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0806 00:38:45.599531    4292 buildroot.go:70] root file system type: tmpfs
	I0806 00:38:45.599626    4292 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0806 00:38:45.599637    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.599779    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.599891    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.599996    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.600086    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.600232    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.600374    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.600420    4292 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.13"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0806 00:38:45.674942    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.13
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0806 00:38:45.674960    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.675092    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.675165    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.675259    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.675344    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.675469    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.675602    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.675614    4292 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0806 00:38:47.211811    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0806 00:38:47.211826    4292 main.go:141] libmachine: Checking connection to Docker...
	I0806 00:38:47.211840    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetURL
	I0806 00:38:47.211985    4292 main.go:141] libmachine: Docker is up and running!
	I0806 00:38:47.211993    4292 main.go:141] libmachine: Reticulating splines...
	I0806 00:38:47.212004    4292 client.go:171] duration metric: took 13.833536596s to LocalClient.Create
	I0806 00:38:47.212016    4292 start.go:167] duration metric: took 13.833577856s to libmachine.API.Create "multinode-100000"
	I0806 00:38:47.212022    4292 start.go:293] postStartSetup for "multinode-100000-m02" (driver="hyperkit")
	I0806 00:38:47.212029    4292 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 00:38:47.212038    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.212165    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 00:38:47.212186    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:47.212274    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:47.212359    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.212450    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:47.212536    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:38:47.253675    4292 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 00:38:47.257359    4292 command_runner.go:130] > NAME=Buildroot
	I0806 00:38:47.257369    4292 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0806 00:38:47.257374    4292 command_runner.go:130] > ID=buildroot
	I0806 00:38:47.257380    4292 command_runner.go:130] > VERSION_ID=2023.02.9
	I0806 00:38:47.257386    4292 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0806 00:38:47.257598    4292 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 00:38:47.257609    4292 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/addons for local assets ...
	I0806 00:38:47.257715    4292 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/files for local assets ...
	I0806 00:38:47.257899    4292 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> 14372.pem in /etc/ssl/certs
	I0806 00:38:47.257909    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> /etc/ssl/certs/14372.pem
	I0806 00:38:47.258116    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 00:38:47.265892    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem --> /etc/ssl/certs/14372.pem (1708 bytes)
	I0806 00:38:47.297110    4292 start.go:296] duration metric: took 85.078237ms for postStartSetup
	I0806 00:38:47.297144    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetConfigRaw
	I0806 00:38:47.297792    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:38:47.297951    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:38:47.298302    4292 start.go:128] duration metric: took 13.951673071s to createHost
	I0806 00:38:47.298316    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:47.298413    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:47.298502    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.298600    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.298678    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:47.298783    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:47.298907    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:47.298914    4292 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 00:38:47.362043    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722929927.409318196
	
	I0806 00:38:47.362057    4292 fix.go:216] guest clock: 1722929927.409318196
	I0806 00:38:47.362062    4292 fix.go:229] Guest: 2024-08-06 00:38:47.409318196 -0700 PDT Remote: 2024-08-06 00:38:47.29831 -0700 PDT m=+194.654596821 (delta=111.008196ms)
	I0806 00:38:47.362071    4292 fix.go:200] guest clock delta is within tolerance: 111.008196ms
	I0806 00:38:47.362075    4292 start.go:83] releasing machines lock for "multinode-100000-m02", held for 14.015572789s
	I0806 00:38:47.362092    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.362220    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:38:47.382612    4292 out.go:177] * Found network options:
	I0806 00:38:47.403509    4292 out.go:177]   - NO_PROXY=192.169.0.13
	W0806 00:38:47.425687    4292 proxy.go:119] fail to check proxy env: Error ip not in block
	I0806 00:38:47.425738    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.426659    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.426958    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.427090    4292 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 00:38:47.427141    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	W0806 00:38:47.427187    4292 proxy.go:119] fail to check proxy env: Error ip not in block
	I0806 00:38:47.427313    4292 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0806 00:38:47.427341    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:47.427407    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:47.427565    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.427581    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:47.427794    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:47.427828    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.428004    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:38:47.428059    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:47.428184    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:38:47.463967    4292 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0806 00:38:47.464076    4292 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 00:38:47.464135    4292 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 00:38:47.515738    4292 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0806 00:38:47.516046    4292 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0806 00:38:47.516081    4292 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 00:38:47.516093    4292 start.go:495] detecting cgroup driver to use...
	I0806 00:38:47.516195    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:38:47.531806    4292 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0806 00:38:47.532062    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0806 00:38:47.541039    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0806 00:38:47.549828    4292 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0806 00:38:47.549876    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0806 00:38:47.558599    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:38:47.567484    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0806 00:38:47.576295    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:38:47.585146    4292 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 00:38:47.594084    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0806 00:38:47.603103    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0806 00:38:47.612032    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0806 00:38:47.620981    4292 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 00:38:47.628905    4292 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0806 00:38:47.629040    4292 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 00:38:47.637032    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:38:47.727863    4292 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0806 00:38:47.745831    4292 start.go:495] detecting cgroup driver to use...
	I0806 00:38:47.745898    4292 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0806 00:38:47.763079    4292 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0806 00:38:47.764017    4292 command_runner.go:130] > [Unit]
	I0806 00:38:47.764028    4292 command_runner.go:130] > Description=Docker Application Container Engine
	I0806 00:38:47.764033    4292 command_runner.go:130] > Documentation=https://docs.docker.com
	I0806 00:38:47.764038    4292 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0806 00:38:47.764043    4292 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0806 00:38:47.764047    4292 command_runner.go:130] > StartLimitBurst=3
	I0806 00:38:47.764051    4292 command_runner.go:130] > StartLimitIntervalSec=60
	I0806 00:38:47.764054    4292 command_runner.go:130] > [Service]
	I0806 00:38:47.764058    4292 command_runner.go:130] > Type=notify
	I0806 00:38:47.764062    4292 command_runner.go:130] > Restart=on-failure
	I0806 00:38:47.764066    4292 command_runner.go:130] > Environment=NO_PROXY=192.169.0.13
	I0806 00:38:47.764072    4292 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0806 00:38:47.764084    4292 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0806 00:38:47.764091    4292 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0806 00:38:47.764099    4292 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0806 00:38:47.764105    4292 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0806 00:38:47.764111    4292 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0806 00:38:47.764118    4292 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0806 00:38:47.764125    4292 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0806 00:38:47.764132    4292 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0806 00:38:47.764135    4292 command_runner.go:130] > ExecStart=
	I0806 00:38:47.764154    4292 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0806 00:38:47.764161    4292 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0806 00:38:47.764170    4292 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0806 00:38:47.764178    4292 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0806 00:38:47.764185    4292 command_runner.go:130] > LimitNOFILE=infinity
	I0806 00:38:47.764190    4292 command_runner.go:130] > LimitNPROC=infinity
	I0806 00:38:47.764193    4292 command_runner.go:130] > LimitCORE=infinity
	I0806 00:38:47.764198    4292 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0806 00:38:47.764203    4292 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0806 00:38:47.764207    4292 command_runner.go:130] > TasksMax=infinity
	I0806 00:38:47.764211    4292 command_runner.go:130] > TimeoutStartSec=0
	I0806 00:38:47.764221    4292 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0806 00:38:47.764225    4292 command_runner.go:130] > Delegate=yes
	I0806 00:38:47.764229    4292 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0806 00:38:47.764248    4292 command_runner.go:130] > KillMode=process
	I0806 00:38:47.764252    4292 command_runner.go:130] > [Install]
	I0806 00:38:47.764256    4292 command_runner.go:130] > WantedBy=multi-user.target
	I0806 00:38:47.765971    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:38:47.779284    4292 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 00:38:47.799617    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:38:47.811733    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:38:47.822897    4292 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0806 00:38:47.842546    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:38:47.852923    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:38:47.867417    4292 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0806 00:38:47.867762    4292 ssh_runner.go:195] Run: which cri-dockerd
	I0806 00:38:47.870482    4292 command_runner.go:130] > /usr/bin/cri-dockerd
	I0806 00:38:47.870656    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0806 00:38:47.877934    4292 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0806 00:38:47.891287    4292 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0806 00:38:47.996736    4292 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0806 00:38:48.093921    4292 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0806 00:38:48.093947    4292 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0806 00:38:48.107654    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:38:48.205348    4292 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:39:49.225463    4292 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0806 00:39:49.225479    4292 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0806 00:39:49.225576    4292 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.019011706s)
	I0806 00:39:49.225635    4292 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0806 00:39:49.235342    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0806 00:39:49.235356    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.029974914Z" level=info msg="Starting up"
	I0806 00:39:49.235366    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.030437769Z" level=info msg="containerd not running, starting managed containerd"
	I0806 00:39:49.235376    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.030979400Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=517
	I0806 00:39:49.235386    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.047036729Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0806 00:39:49.235397    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064397167Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0806 00:39:49.235412    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064452673Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0806 00:39:49.235422    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064502313Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0806 00:39:49.235431    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064513542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235443    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064584182Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235454    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064595120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235473    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064727739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235483    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064762709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235494    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064774342Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235504    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064782161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235516    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064887916Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235526    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.065042581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235542    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.066836201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235552    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.066879570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235575    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067028916Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235585    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067064324Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0806 00:39:49.235594    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067179567Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0806 00:39:49.235602    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067249087Z" level=info msg="metadata content store policy set" policy=shared
	I0806 00:39:49.235611    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069585528Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0806 00:39:49.235620    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069659860Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0806 00:39:49.235632    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069674694Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0806 00:39:49.235641    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069684754Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0806 00:39:49.235650    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069696901Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0806 00:39:49.235663    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069776277Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0806 00:39:49.235672    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070041788Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0806 00:39:49.235681    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070145442Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0806 00:39:49.235690    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070181841Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0806 00:39:49.235699    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070193788Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0806 00:39:49.235708    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070209053Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235730    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070220561Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235739    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070229053Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235748    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070237872Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235765    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070247145Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235774    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070258808Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235870    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070271932Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235884    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070282113Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235895    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070295317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235905    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070333749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235913    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070369063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235922    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070379382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235931    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070387399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235940    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070395816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235948    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070403669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235957    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070414456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235966    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070430669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235975    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070442977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235983    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070451302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235992    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070459477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236001    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070468439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236009    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070478113Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0806 00:39:49.236018    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070497412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236026    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070508384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236035    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070518009Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0806 00:39:49.236044    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070547883Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0806 00:39:49.236055    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070582373Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0806 00:39:49.236065    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070592270Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0806 00:39:49.236165    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070600495Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0806 00:39:49.236179    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070607217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236192    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070615273Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0806 00:39:49.236200    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070622931Z" level=info msg="NRI interface is disabled by configuration."
	I0806 00:39:49.236208    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070750538Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0806 00:39:49.236217    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070809085Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0806 00:39:49.236224    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070954500Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0806 00:39:49.236232    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070997549Z" level=info msg="containerd successfully booted in 0.024512s"
	I0806 00:39:49.236240    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.050791909Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0806 00:39:49.236247    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.057142082Z" level=info msg="Loading containers: start."
	I0806 00:39:49.236266    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.142415375Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0806 00:39:49.236275    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.222958623Z" level=info msg="Loading containers: done."
	I0806 00:39:49.236287    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.231011060Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	I0806 00:39:49.236296    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.231179810Z" level=info msg="Daemon has completed initialization"
	I0806 00:39:49.236304    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.256766502Z" level=info msg="API listen on [::]:2376"
	I0806 00:39:49.236312    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 systemd[1]: Started Docker Application Container Engine.
	I0806 00:39:49.236320    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.256921161Z" level=info msg="API listen on /var/run/docker.sock"
	I0806 00:39:49.236327    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.264611587Z" level=info msg="Processing signal 'terminated'"
	I0806 00:39:49.236336    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265650519Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0806 00:39:49.236346    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265852818Z" level=info msg="Daemon shutdown complete"
	I0806 00:39:49.236355    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265902413Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0806 00:39:49.236364    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265913447Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0806 00:39:49.236371    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0806 00:39:49.236376    4292 command_runner.go:130] > Aug 06 07:38:49 multinode-100000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0806 00:39:49.236404    4292 command_runner.go:130] > Aug 06 07:38:49 multinode-100000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0806 00:39:49.236411    4292 command_runner.go:130] > Aug 06 07:38:49 multinode-100000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0806 00:39:49.236417    4292 command_runner.go:130] > Aug 06 07:38:49 multinode-100000-m02 dockerd[911]: time="2024-08-06T07:38:49.299585024Z" level=info msg="Starting up"
	I0806 00:39:49.236427    4292 command_runner.go:130] > Aug 06 07:39:49 multinode-100000-m02 dockerd[911]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0806 00:39:49.236434    4292 command_runner.go:130] > Aug 06 07:39:49 multinode-100000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0806 00:39:49.236440    4292 command_runner.go:130] > Aug 06 07:39:49 multinode-100000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0806 00:39:49.236446    4292 command_runner.go:130] > Aug 06 07:39:49 multinode-100000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0806 00:39:49.260697    4292 out.go:177] 
	W0806 00:39:49.281618    4292 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 06 07:38:46 multinode-100000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.029974914Z" level=info msg="Starting up"
	Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.030437769Z" level=info msg="containerd not running, starting managed containerd"
	Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.030979400Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=517
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.047036729Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064397167Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064452673Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064502313Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064513542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064584182Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064595120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064727739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064762709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064774342Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064782161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064887916Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.065042581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.066836201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.066879570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067028916Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067064324Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067179567Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067249087Z" level=info msg="metadata content store policy set" policy=shared
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069585528Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069659860Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069674694Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069684754Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069696901Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069776277Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070041788Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070145442Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070181841Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070193788Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070209053Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070220561Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070229053Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070237872Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070247145Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070258808Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070271932Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070282113Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070295317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070333749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070369063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070379382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070387399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070395816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070403669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070414456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070430669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070442977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070451302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070459477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070468439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070478113Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070497412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070508384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070518009Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070547883Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070582373Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070592270Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070600495Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070607217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070615273Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070622931Z" level=info msg="NRI interface is disabled by configuration."
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070750538Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070809085Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070954500Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070997549Z" level=info msg="containerd successfully booted in 0.024512s"
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.050791909Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.057142082Z" level=info msg="Loading containers: start."
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.142415375Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.222958623Z" level=info msg="Loading containers: done."
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.231011060Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.231179810Z" level=info msg="Daemon has completed initialization"
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.256766502Z" level=info msg="API listen on [::]:2376"
	Aug 06 07:38:47 multinode-100000-m02 systemd[1]: Started Docker Application Container Engine.
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.256921161Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.264611587Z" level=info msg="Processing signal 'terminated'"
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265650519Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265852818Z" level=info msg="Daemon shutdown complete"
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265902413Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265913447Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 06 07:38:48 multinode-100000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Aug 06 07:38:49 multinode-100000-m02 systemd[1]: docker.service: Deactivated successfully.
	Aug 06 07:38:49 multinode-100000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:38:49 multinode-100000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:38:49 multinode-100000-m02 dockerd[911]: time="2024-08-06T07:38:49.299585024Z" level=info msg="Starting up"
	Aug 06 07:39:49 multinode-100000-m02 dockerd[911]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 06 07:39:49 multinode-100000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 06 07:39:49 multinode-100000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 06 07:39:49 multinode-100000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0806 00:39:49.281745    4292 out.go:239] * 
	W0806 00:39:49.282923    4292 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 00:39:49.343567    4292 out.go:177] 
	
	
	==> Docker <==
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.120405532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.122053171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.122124908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.122262728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.123348677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 cri-dockerd[1120]: time="2024-08-06T07:38:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5fae897eca5b0180afaec9950c31ab8fe6410f45ea64033ab2505d448d0abc87/resolv.conf as [nameserver 192.169.0.1]"
	Aug 06 07:38:31 multinode-100000 cri-dockerd[1120]: time="2024-08-06T07:38:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ea5bc31c54836987e38373933c6df0383027c87ef8cff7c9e1da5b24b5cabe9c/resolv.conf as [nameserver 192.169.0.1]"
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.260884497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.261094181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.261344995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.270291928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.310563342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.310630330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.310652817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.310750128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:39:53 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:53.415212392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:39:53 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:53.415272093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:39:53 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:53.415281683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:39:53 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:53.415427967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:39:53 multinode-100000 cri-dockerd[1120]: time="2024-08-06T07:39:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/730773bd53054521739eb2bf3731e90f06df86c05a2f2435964943abea426db3/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 06 07:39:54 multinode-100000 cri-dockerd[1120]: time="2024-08-06T07:39:54Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Aug 06 07:39:54 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:54.619309751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:39:54 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:54.619368219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:39:54 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:54.619377598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:39:54 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:54.619772649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f4860a1bb0cb9       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Running             busybox                   0                   730773bd53054       busybox-fc5497c4f-dzbn7
	4a58bc5cb9c3e       cbb01a7bd410d                                                                                         14 minutes ago      Running             coredns                   0                   ea5bc31c54836       coredns-7db6d8ff4d-snf8h
	47e0c0c6895ef       6e38f40d628db                                                                                         14 minutes ago      Running             storage-provisioner       0                   5fae897eca5b0       storage-provisioner
	ca21c7b20c75e       kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3              14 minutes ago      Running             kindnet-cni               0                   731b397a827bd       kindnet-g2xk7
	10a2028447459       55bb025d2cfa5                                                                                         14 minutes ago      Running             kube-proxy                0                   6bbb2ed0b308f       kube-proxy-crsrr
	09c41cba0052b       3edc18e7b7672                                                                                         14 minutes ago      Running             kube-scheduler            0                   d20d569460ead       kube-scheduler-multinode-100000
	b60a8dd0efa51       3861cfcd7c04c                                                                                         14 minutes ago      Running             etcd                      0                   94cf07fa5ddcf       etcd-multinode-100000
	6d93185f30a91       1f6d574d502f3                                                                                         14 minutes ago      Running             kube-apiserver            0                   bde71375b0e4c       kube-apiserver-multinode-100000
	e6892e6b325e1       76932a3b37d7e                                                                                         14 minutes ago      Running             kube-controller-manager   0                   8cca7996d392f       kube-controller-manager-multinode-100000
	
	
	==> coredns [4a58bc5cb9c3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54441 - 10694 "HINFO IN 5152607944082316412.2643734041882751245. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.012399296s
	[INFO] 10.244.0.3:56703 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015252s
	[INFO] 10.244.0.3:42200 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.046026881s
	[INFO] 10.244.0.3:42318 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.01031955s
	[INFO] 10.244.0.3:37586 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.010459799s
	[INFO] 10.244.0.3:58156 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135202s
	[INFO] 10.244.0.3:44245 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010537472s
	[INFO] 10.244.0.3:44922 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150629s
	[INFO] 10.244.0.3:39974 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013721s
	[INFO] 10.244.0.3:33617 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010347469s
	[INFO] 10.244.0.3:38936 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154675s
	[INFO] 10.244.0.3:44726 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080983s
	[INFO] 10.244.0.3:41349 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000247413s
	[INFO] 10.244.0.3:54177 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116507s
	[INFO] 10.244.0.3:35929 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000055089s
	[INFO] 10.244.0.3:46361 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084906s
	[INFO] 10.244.0.3:49686 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085442s
	[INFO] 10.244.0.3:47333 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0000847s
	[INFO] 10.244.0.3:41915 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000057433s
	[INFO] 10.244.0.3:34860 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000071303s
	[INFO] 10.244.0.3:46952 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000111703s
	
	
	==> describe nodes <==
	Name:               multinode-100000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-100000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=multinode-100000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_06T00_38_01_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:37:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-100000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 07:52:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 07:50:14 +0000   Tue, 06 Aug 2024 07:37:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 07:50:14 +0000   Tue, 06 Aug 2024 07:37:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 07:50:14 +0000   Tue, 06 Aug 2024 07:37:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 07:50:14 +0000   Tue, 06 Aug 2024 07:38:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.13
	  Hostname:    multinode-100000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 10d8fd2a8ab04e6a90b6dfc076d9ae86
	  System UUID:                9d6d49b5-0000-0000-bb0f-6ea8b6ad2848
	  Boot ID:                    dbebf245-a006-4d46-bf5f-51c5f84b672f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dzbn7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7db6d8ff4d-snf8h                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-multinode-100000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-g2xk7                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-multinode-100000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-100000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-crsrr                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-multinode-100000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node multinode-100000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node multinode-100000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node multinode-100000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node multinode-100000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node multinode-100000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node multinode-100000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node multinode-100000 event: Registered Node multinode-100000 in Controller
	  Normal  NodeReady                14m                kubelet          Node multinode-100000 status is now: NodeReady
	
	
	Name:               multinode-100000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-100000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=multinode-100000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_06T00_52_07_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:52:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-100000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 07:52:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 07:52:37 +0000   Tue, 06 Aug 2024 07:52:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 07:52:37 +0000   Tue, 06 Aug 2024 07:52:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 07:52:37 +0000   Tue, 06 Aug 2024 07:52:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 07:52:37 +0000   Tue, 06 Aug 2024 07:52:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.15
	  Hostname:    multinode-100000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 e4dd3c8067364c01aff8902f752ac959
	  System UUID:                83a944ea-0000-0000-930f-df1a6331c821
	  Boot ID:                    dc071d27-e6bc-46d1-9730-b50a8d4da1b8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-6l7f2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kindnet-dn72w              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      40s
	  kube-system                 kube-proxy-d9c42           0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 33s                kube-proxy       
	  Normal  NodeHasSufficientMemory  40s (x2 over 40s)  kubelet          Node multinode-100000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s (x2 over 40s)  kubelet          Node multinode-100000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s (x2 over 40s)  kubelet          Node multinode-100000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  40s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           38s                node-controller  Node multinode-100000-m03 event: Registered Node multinode-100000-m03 in Controller
	  Normal  NodeReady                17s                kubelet          Node multinode-100000-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +2.230733] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.851509] systemd-fstab-generator[493]: Ignoring "noauto" option for root device
	[  +0.100234] systemd-fstab-generator[504]: Ignoring "noauto" option for root device
	[  +1.793153] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +0.258718] systemd-fstab-generator[802]: Ignoring "noauto" option for root device
	[  +0.053606] kauditd_printk_skb: 95 callbacks suppressed
	[  +0.051277] systemd-fstab-generator[814]: Ignoring "noauto" option for root device
	[  +0.111209] systemd-fstab-generator[828]: Ignoring "noauto" option for root device
	[Aug 6 07:37] systemd-fstab-generator[1073]: Ignoring "noauto" option for root device
	[  +0.053283] kauditd_printk_skb: 92 callbacks suppressed
	[  +0.042150] systemd-fstab-generator[1085]: Ignoring "noauto" option for root device
	[  +0.103517] systemd-fstab-generator[1097]: Ignoring "noauto" option for root device
	[  +0.125760] systemd-fstab-generator[1112]: Ignoring "noauto" option for root device
	[  +3.585995] systemd-fstab-generator[1212]: Ignoring "noauto" option for root device
	[  +2.213789] kauditd_printk_skb: 100 callbacks suppressed
	[  +0.337931] systemd-fstab-generator[1463]: Ignoring "noauto" option for root device
	[  +3.523944] systemd-fstab-generator[1642]: Ignoring "noauto" option for root device
	[  +1.294549] kauditd_printk_skb: 100 callbacks suppressed
	[  +3.741886] systemd-fstab-generator[2044]: Ignoring "noauto" option for root device
	[Aug 6 07:38] systemd-fstab-generator[2255]: Ignoring "noauto" option for root device
	[  +0.124943] kauditd_printk_skb: 32 callbacks suppressed
	[ +16.004460] kauditd_printk_skb: 60 callbacks suppressed
	[Aug 6 07:39] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [b60a8dd0efa5] <==
	{"level":"info","ts":"2024-08-06T07:37:56.793645Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-06T07:37:56.796498Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2024-08-06T07:37:56.796632Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","added-peer-id":"e0290fa3161c5471","added-peer-peer-urls":["https://192.169.0.13:2380"]}
	{"level":"info","ts":"2024-08-06T07:37:57.149401Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-06T07:37:57.149446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-06T07:37:57.149465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgPreVoteResp from e0290fa3161c5471 at term 1"}
	{"level":"info","ts":"2024-08-06T07:37:57.149631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became candidate at term 2"}
	{"level":"info","ts":"2024-08-06T07:37:57.14964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgVoteResp from e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-08-06T07:37:57.149646Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became leader at term 2"}
	{"level":"info","ts":"2024-08-06T07:37:57.149652Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e0290fa3161c5471 elected leader e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-08-06T07:37:57.152418Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:37:57.153493Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e0290fa3161c5471","local-member-attributes":"{Name:multinode-100000 ClientURLs:[https://192.169.0.13:2379]}","request-path":"/0/members/e0290fa3161c5471/attributes","cluster-id":"87b46e718846f146","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-06T07:37:57.153528Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T07:37:57.154583Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T07:37:57.156332Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-06T07:37:57.162987Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.13:2379"}
	{"level":"info","ts":"2024-08-06T07:37:57.167336Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-06T07:37:57.167373Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-06T07:37:57.16953Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:37:57.169589Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:37:57.169719Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:47:57.219223Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":686}
	{"level":"info","ts":"2024-08-06T07:47:57.221754Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":686,"took":"2.185771ms","hash":4164319908,"current-db-size-bytes":1994752,"current-db-size":"2.0 MB","current-db-size-in-use-bytes":1994752,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-08-06T07:47:57.221798Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4164319908,"revision":686,"compact-revision":-1}
	{"level":"info","ts":"2024-08-06T07:52:10.269202Z","caller":"traceutil/trace.go:171","msg":"trace[808197773] transaction","detail":"{read_only:false; response_revision:1165; number_of_response:1; }","duration":"104.082235ms","start":"2024-08-06T07:52:10.165072Z","end":"2024-08-06T07:52:10.269154Z","steps":["trace[808197773] 'process raft request'  (duration: 103.999362ms)"],"step_count":1}
	
	
	==> kernel <==
	 07:52:48 up 17 min,  0 users,  load average: 0.44, 0.18, 0.08
	Linux multinode-100000 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ca21c7b20c75] <==
	I0806 07:51:29.610799       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:51:29.611016       1 main.go:299] handling current node
	I0806 07:51:39.608566       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:51:39.608751       1 main.go:299] handling current node
	I0806 07:51:49.609079       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:51:49.609255       1 main.go:299] handling current node
	I0806 07:51:59.615217       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:51:59.615256       1 main.go:299] handling current node
	I0806 07:52:09.608220       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:52:09.608290       1 main.go:299] handling current node
	I0806 07:52:09.608308       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0806 07:52:09.608317       1 main.go:322] Node multinode-100000-m03 has CIDR [10.244.1.0/24] 
	I0806 07:52:09.608837       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.169.0.15 Flags: [] Table: 0} 
	I0806 07:52:19.608568       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0806 07:52:19.608810       1 main.go:322] Node multinode-100000-m03 has CIDR [10.244.1.0/24] 
	I0806 07:52:19.608997       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:52:19.609157       1 main.go:299] handling current node
	I0806 07:52:29.618338       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:52:29.618506       1 main.go:299] handling current node
	I0806 07:52:29.618578       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0806 07:52:29.618615       1 main.go:322] Node multinode-100000-m03 has CIDR [10.244.1.0/24] 
	I0806 07:52:39.608721       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:52:39.608873       1 main.go:299] handling current node
	I0806 07:52:39.608944       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0806 07:52:39.608975       1 main.go:322] Node multinode-100000-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [6d93185f30a9] <==
	E0806 07:37:58.467821       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E0806 07:37:58.475966       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0806 07:37:58.532827       1 controller.go:615] quota admission added evaluator for: namespaces
	E0806 07:37:58.541093       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0806 07:37:58.672921       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0806 07:37:59.326856       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0806 07:37:59.329555       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0806 07:37:59.329585       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0806 07:37:59.607795       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0806 07:37:59.629707       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0806 07:37:59.743716       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0806 07:37:59.749420       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.13]
	I0806 07:37:59.751068       1 controller.go:615] quota admission added evaluator for: endpoints
	I0806 07:37:59.755409       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0806 07:38:00.364128       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0806 07:38:00.587524       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0806 07:38:00.593919       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0806 07:38:00.599813       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0806 07:38:14.702592       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0806 07:38:14.795881       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0806 07:51:40.593542       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52513: use of closed network connection
	E0806 07:51:40.913864       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52518: use of closed network connection
	E0806 07:51:41.219815       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52523: use of closed network connection
	E0806 07:51:44.319914       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52554: use of closed network connection
	E0806 07:51:44.505332       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52556: use of closed network connection
	
	
	==> kube-controller-manager [e6892e6b325e] <==
	I0806 07:38:15.355219       1 shared_informer.go:320] Caches are synced for garbage collector
	I0806 07:38:15.355235       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0806 07:38:15.401729       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="38.655935ms"
	I0806 07:38:15.431945       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="30.14675ms"
	I0806 07:38:15.458535       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="26.562482ms"
	I0806 07:38:15.458649       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="50.614µs"
	I0806 07:38:30.766337       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="35.896µs"
	I0806 07:38:30.775206       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.914µs"
	I0806 07:38:31.717892       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="59.878µs"
	I0806 07:38:31.736658       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="9.976174ms"
	I0806 07:38:31.737084       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.186µs"
	I0806 07:38:34.714007       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0806 07:39:52.487758       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.078135ms"
	I0806 07:39:52.498018       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.216294ms"
	I0806 07:39:52.498073       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.228µs"
	I0806 07:39:55.173384       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.984127ms"
	I0806 07:39:55.173460       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.692µs"
	I0806 07:52:07.325935       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-100000-m03\" does not exist"
	I0806 07:52:07.342865       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-100000-m03" podCIDRs=["10.244.1.0/24"]
	I0806 07:52:09.851060       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-100000-m03"
	I0806 07:52:30.373055       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-100000-m03"
	I0806 07:52:30.382873       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.276µs"
	I0806 07:52:30.391038       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.602µs"
	I0806 07:52:32.408559       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.578386ms"
	I0806 07:52:32.408616       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.014µs"
	
	
	==> kube-proxy [10a202844745] <==
	I0806 07:38:15.590518       1 server_linux.go:69] "Using iptables proxy"
	I0806 07:38:15.601869       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.13"]
	I0806 07:38:15.662400       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0806 07:38:15.662440       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0806 07:38:15.662490       1 server_linux.go:165] "Using iptables Proxier"
	I0806 07:38:15.664791       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0806 07:38:15.664918       1 server.go:872] "Version info" version="v1.30.3"
	I0806 07:38:15.664946       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 07:38:15.665753       1 config.go:192] "Starting service config controller"
	I0806 07:38:15.665783       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0806 07:38:15.665799       1 config.go:101] "Starting endpoint slice config controller"
	I0806 07:38:15.665822       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0806 07:38:15.667388       1 config.go:319] "Starting node config controller"
	I0806 07:38:15.667416       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0806 07:38:15.765917       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0806 07:38:15.765965       1 shared_informer.go:320] Caches are synced for service config
	I0806 07:38:15.767534       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [09c41cba0052] <==
	W0806 07:37:58.445840       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0806 07:37:58.445932       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0806 07:37:58.446107       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0806 07:37:58.446242       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0806 07:37:58.446116       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0806 07:37:58.446419       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0806 07:37:58.445401       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0806 07:37:58.446582       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0806 07:37:58.446196       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0806 07:37:58.446734       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0806 07:37:59.253603       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0806 07:37:59.253776       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0806 07:37:59.282330       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0806 07:37:59.282504       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0806 07:37:59.305407       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0806 07:37:59.305621       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0806 07:37:59.351009       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0806 07:37:59.351049       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0806 07:37:59.487287       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0806 07:37:59.487395       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0806 07:37:59.506883       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0806 07:37:59.506925       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0806 07:37:59.509357       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0806 07:37:59.509392       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0806 07:38:01.840667       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 06 07:48:00 multinode-100000 kubelet[2051]: E0806 07:48:00.482201    2051 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:48:00 multinode-100000 kubelet[2051]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:48:00 multinode-100000 kubelet[2051]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:48:00 multinode-100000 kubelet[2051]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:48:00 multinode-100000 kubelet[2051]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:49:00 multinode-100000 kubelet[2051]: E0806 07:49:00.485250    2051 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:49:00 multinode-100000 kubelet[2051]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:49:00 multinode-100000 kubelet[2051]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:49:00 multinode-100000 kubelet[2051]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:49:00 multinode-100000 kubelet[2051]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:50:00 multinode-100000 kubelet[2051]: E0806 07:50:00.481450    2051 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:50:00 multinode-100000 kubelet[2051]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:50:00 multinode-100000 kubelet[2051]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:50:00 multinode-100000 kubelet[2051]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:50:00 multinode-100000 kubelet[2051]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:51:00 multinode-100000 kubelet[2051]: E0806 07:51:00.483720    2051 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:51:00 multinode-100000 kubelet[2051]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:51:00 multinode-100000 kubelet[2051]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:51:00 multinode-100000 kubelet[2051]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:51:00 multinode-100000 kubelet[2051]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:52:00 multinode-100000 kubelet[2051]: E0806 07:52:00.481620    2051 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:52:00 multinode-100000 kubelet[2051]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:52:00 multinode-100000 kubelet[2051]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:52:00 multinode-100000 kubelet[2051]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:52:00 multinode-100000 kubelet[2051]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-100000 -n multinode-100000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-100000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopNode (11.49s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (98.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-100000 node start m03 -v=7 --alsologtostderr
E0806 00:53:22.347482    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p multinode-100000 node start m03 -v=7 --alsologtostderr: (41.068050613s)
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-100000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-100000 status -v=7 --alsologtostderr: exit status 2 (322.051155ms)

                                                
                                                
-- stdout --
	multinode-100000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-100000-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-100000-m03
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 00:53:30.228905    5255 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:53:30.229115    5255 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:53:30.229120    5255 out.go:304] Setting ErrFile to fd 2...
	I0806 00:53:30.229124    5255 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:53:30.229315    5255 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	I0806 00:53:30.229492    5255 out.go:298] Setting JSON to false
	I0806 00:53:30.229513    5255 mustload.go:65] Loading cluster: multinode-100000
	I0806 00:53:30.229552    5255 notify.go:220] Checking for updates...
	I0806 00:53:30.229833    5255 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:53:30.229848    5255 status.go:255] checking status of multinode-100000 ...
	I0806 00:53:30.230205    5255 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:30.230244    5255 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:30.239277    5255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52825
	I0806 00:53:30.239680    5255 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:30.240086    5255 main.go:141] libmachine: Using API Version  1
	I0806 00:53:30.240101    5255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:30.240342    5255 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:30.240453    5255 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:53:30.240542    5255 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:53:30.240618    5255 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:53:30.241591    5255 status.go:330] multinode-100000 host status = "Running" (err=<nil>)
	I0806 00:53:30.241612    5255 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:53:30.241840    5255 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:30.241860    5255 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:30.250298    5255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52827
	I0806 00:53:30.250619    5255 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:30.251003    5255 main.go:141] libmachine: Using API Version  1
	I0806 00:53:30.251023    5255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:30.251265    5255 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:30.251383    5255 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:53:30.251470    5255 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:53:30.251724    5255 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:30.251752    5255 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:30.260282    5255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52829
	I0806 00:53:30.260647    5255 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:30.260979    5255 main.go:141] libmachine: Using API Version  1
	I0806 00:53:30.260997    5255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:30.261236    5255 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:30.261363    5255 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:53:30.261534    5255 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:53:30.261565    5255 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:53:30.261663    5255 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:53:30.261756    5255 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:53:30.261854    5255 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:53:30.261946    5255 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:53:30.301568    5255 ssh_runner.go:195] Run: systemctl --version
	I0806 00:53:30.306160    5255 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:53:30.316864    5255 kubeconfig.go:125] found "multinode-100000" server: "https://192.169.0.13:8443"
	I0806 00:53:30.316889    5255 api_server.go:166] Checking apiserver status ...
	I0806 00:53:30.316929    5255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:53:30.328546    5255 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1953/cgroup
	W0806 00:53:30.336276    5255 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1953/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0806 00:53:30.336321    5255 ssh_runner.go:195] Run: ls
	I0806 00:53:30.339475    5255 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0806 00:53:30.342458    5255 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0806 00:53:30.342469    5255 status.go:422] multinode-100000 apiserver status = Running (err=<nil>)
	I0806 00:53:30.342478    5255 status.go:257] multinode-100000 status: &{Name:multinode-100000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 00:53:30.342489    5255 status.go:255] checking status of multinode-100000-m02 ...
	I0806 00:53:30.342749    5255 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:30.342769    5255 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:30.351512    5255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52834
	I0806 00:53:30.351863    5255 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:30.352209    5255 main.go:141] libmachine: Using API Version  1
	I0806 00:53:30.352226    5255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:30.352450    5255 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:30.352556    5255 main.go:141] libmachine: (multinode-100000-m02) Calling .GetState
	I0806 00:53:30.352639    5255 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:53:30.352715    5255 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:53:30.353669    5255 status.go:330] multinode-100000-m02 host status = "Running" (err=<nil>)
	I0806 00:53:30.353679    5255 host.go:66] Checking if "multinode-100000-m02" exists ...
	I0806 00:53:30.353920    5255 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:30.353946    5255 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:30.362516    5255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52836
	I0806 00:53:30.362841    5255 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:30.363196    5255 main.go:141] libmachine: Using API Version  1
	I0806 00:53:30.363212    5255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:30.363436    5255 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:30.363551    5255 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:53:30.363642    5255 host.go:66] Checking if "multinode-100000-m02" exists ...
	I0806 00:53:30.363901    5255 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:30.363927    5255 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:30.372389    5255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52838
	I0806 00:53:30.372739    5255 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:30.373070    5255 main.go:141] libmachine: Using API Version  1
	I0806 00:53:30.373094    5255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:30.373290    5255 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:30.373401    5255 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:53:30.373531    5255 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:53:30.373543    5255 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:53:30.373626    5255 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:53:30.373706    5255 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:53:30.373798    5255 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:53:30.373876    5255 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:53:30.410437    5255 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:53:30.421090    5255 status.go:257] multinode-100000-m02 status: &{Name:multinode-100000-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0806 00:53:30.421121    5255 status.go:255] checking status of multinode-100000-m03 ...
	I0806 00:53:30.421423    5255 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:30.421447    5255 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:30.430144    5255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52841
	I0806 00:53:30.430484    5255 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:30.430824    5255 main.go:141] libmachine: Using API Version  1
	I0806 00:53:30.430837    5255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:30.431053    5255 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:30.431181    5255 main.go:141] libmachine: (multinode-100000-m03) Calling .GetState
	I0806 00:53:30.431258    5255 main.go:141] libmachine: (multinode-100000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:53:30.431350    5255 main.go:141] libmachine: (multinode-100000-m03) DBG | hyperkit pid from json: 5220
	I0806 00:53:30.432308    5255 status.go:330] multinode-100000-m03 host status = "Running" (err=<nil>)
	I0806 00:53:30.432318    5255 host.go:66] Checking if "multinode-100000-m03" exists ...
	I0806 00:53:30.432555    5255 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:30.432583    5255 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:30.440965    5255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52843
	I0806 00:53:30.441273    5255 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:30.441620    5255 main.go:141] libmachine: Using API Version  1
	I0806 00:53:30.441637    5255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:30.441872    5255 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:30.441980    5255 main.go:141] libmachine: (multinode-100000-m03) Calling .GetIP
	I0806 00:53:30.442063    5255 host.go:66] Checking if "multinode-100000-m03" exists ...
	I0806 00:53:30.442325    5255 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:30.442349    5255 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:30.450867    5255 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52845
	I0806 00:53:30.451205    5255 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:30.451551    5255 main.go:141] libmachine: Using API Version  1
	I0806 00:53:30.451568    5255 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:30.451769    5255 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:30.451866    5255 main.go:141] libmachine: (multinode-100000-m03) Calling .DriverName
	I0806 00:53:30.451976    5255 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:53:30.451987    5255 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHHostname
	I0806 00:53:30.452067    5255 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHPort
	I0806 00:53:30.452135    5255 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:53:30.452246    5255 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHUsername
	I0806 00:53:30.452325    5255 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/id_rsa Username:docker}
	I0806 00:53:30.485337    5255 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:53:30.495868    5255 status.go:257] multinode-100000-m03 status: &{Name:multinode-100000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-100000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-100000 status -v=7 --alsologtostderr: exit status 2 (317.912833ms)

                                                
                                                
-- stdout --
	multinode-100000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-100000-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-100000-m03
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 00:53:31.501642    5266 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:53:31.501828    5266 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:53:31.501833    5266 out.go:304] Setting ErrFile to fd 2...
	I0806 00:53:31.501837    5266 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:53:31.502008    5266 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	I0806 00:53:31.502190    5266 out.go:298] Setting JSON to false
	I0806 00:53:31.502214    5266 mustload.go:65] Loading cluster: multinode-100000
	I0806 00:53:31.502271    5266 notify.go:220] Checking for updates...
	I0806 00:53:31.502539    5266 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:53:31.502555    5266 status.go:255] checking status of multinode-100000 ...
	I0806 00:53:31.502904    5266 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:31.502948    5266 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:31.511787    5266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52849
	I0806 00:53:31.512235    5266 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:31.512662    5266 main.go:141] libmachine: Using API Version  1
	I0806 00:53:31.512672    5266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:31.512894    5266 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:31.513003    5266 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:53:31.513085    5266 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:53:31.513154    5266 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:53:31.514141    5266 status.go:330] multinode-100000 host status = "Running" (err=<nil>)
	I0806 00:53:31.514162    5266 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:53:31.514402    5266 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:31.514423    5266 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:31.523544    5266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52851
	I0806 00:53:31.523914    5266 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:31.524307    5266 main.go:141] libmachine: Using API Version  1
	I0806 00:53:31.524338    5266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:31.524572    5266 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:31.524705    5266 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:53:31.524797    5266 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:53:31.525062    5266 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:31.525085    5266 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:31.533626    5266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52853
	I0806 00:53:31.533933    5266 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:31.534289    5266 main.go:141] libmachine: Using API Version  1
	I0806 00:53:31.534305    5266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:31.534529    5266 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:31.534639    5266 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:53:31.534784    5266 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:53:31.534804    5266 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:53:31.534890    5266 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:53:31.534972    5266 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:53:31.535057    5266 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:53:31.535136    5266 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:53:31.572275    5266 ssh_runner.go:195] Run: systemctl --version
	I0806 00:53:31.576736    5266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:53:31.587201    5266 kubeconfig.go:125] found "multinode-100000" server: "https://192.169.0.13:8443"
	I0806 00:53:31.587226    5266 api_server.go:166] Checking apiserver status ...
	I0806 00:53:31.587266    5266 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:53:31.597991    5266 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1953/cgroup
	W0806 00:53:31.605060    5266 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1953/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0806 00:53:31.605100    5266 ssh_runner.go:195] Run: ls
	I0806 00:53:31.608258    5266 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0806 00:53:31.611880    5266 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0806 00:53:31.611890    5266 status.go:422] multinode-100000 apiserver status = Running (err=<nil>)
	I0806 00:53:31.611899    5266 status.go:257] multinode-100000 status: &{Name:multinode-100000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 00:53:31.611910    5266 status.go:255] checking status of multinode-100000-m02 ...
	I0806 00:53:31.612157    5266 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:31.612178    5266 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:31.620839    5266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52857
	I0806 00:53:31.621160    5266 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:31.621487    5266 main.go:141] libmachine: Using API Version  1
	I0806 00:53:31.621501    5266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:31.621726    5266 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:31.621829    5266 main.go:141] libmachine: (multinode-100000-m02) Calling .GetState
	I0806 00:53:31.621910    5266 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:53:31.621979    5266 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:53:31.622964    5266 status.go:330] multinode-100000-m02 host status = "Running" (err=<nil>)
	I0806 00:53:31.622970    5266 host.go:66] Checking if "multinode-100000-m02" exists ...
	I0806 00:53:31.623212    5266 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:31.623245    5266 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:31.631825    5266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52859
	I0806 00:53:31.632212    5266 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:31.632536    5266 main.go:141] libmachine: Using API Version  1
	I0806 00:53:31.632546    5266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:31.632756    5266 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:31.632866    5266 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:53:31.632949    5266 host.go:66] Checking if "multinode-100000-m02" exists ...
	I0806 00:53:31.633225    5266 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:31.633247    5266 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:31.641728    5266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52861
	I0806 00:53:31.642071    5266 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:31.642372    5266 main.go:141] libmachine: Using API Version  1
	I0806 00:53:31.642379    5266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:31.642625    5266 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:31.642786    5266 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:53:31.642912    5266 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:53:31.642923    5266 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:53:31.643002    5266 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:53:31.643108    5266 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:53:31.643198    5266 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:53:31.643280    5266 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:53:31.678822    5266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:53:31.689285    5266 status.go:257] multinode-100000-m02 status: &{Name:multinode-100000-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0806 00:53:31.689300    5266 status.go:255] checking status of multinode-100000-m03 ...
	I0806 00:53:31.689573    5266 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:31.689596    5266 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:31.698067    5266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52864
	I0806 00:53:31.698385    5266 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:31.698732    5266 main.go:141] libmachine: Using API Version  1
	I0806 00:53:31.698745    5266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:31.698964    5266 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:31.699098    5266 main.go:141] libmachine: (multinode-100000-m03) Calling .GetState
	I0806 00:53:31.699174    5266 main.go:141] libmachine: (multinode-100000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:53:31.699258    5266 main.go:141] libmachine: (multinode-100000-m03) DBG | hyperkit pid from json: 5220
	I0806 00:53:31.700224    5266 status.go:330] multinode-100000-m03 host status = "Running" (err=<nil>)
	I0806 00:53:31.700234    5266 host.go:66] Checking if "multinode-100000-m03" exists ...
	I0806 00:53:31.700471    5266 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:31.700501    5266 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:31.708851    5266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52866
	I0806 00:53:31.709174    5266 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:31.709526    5266 main.go:141] libmachine: Using API Version  1
	I0806 00:53:31.709538    5266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:31.709773    5266 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:31.709885    5266 main.go:141] libmachine: (multinode-100000-m03) Calling .GetIP
	I0806 00:53:31.709968    5266 host.go:66] Checking if "multinode-100000-m03" exists ...
	I0806 00:53:31.710204    5266 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:31.710231    5266 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:31.718609    5266 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52868
	I0806 00:53:31.718947    5266 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:31.719249    5266 main.go:141] libmachine: Using API Version  1
	I0806 00:53:31.719258    5266 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:31.719451    5266 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:31.719567    5266 main.go:141] libmachine: (multinode-100000-m03) Calling .DriverName
	I0806 00:53:31.719701    5266 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:53:31.719713    5266 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHHostname
	I0806 00:53:31.719784    5266 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHPort
	I0806 00:53:31.719872    5266 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:53:31.719941    5266 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHUsername
	I0806 00:53:31.720003    5266 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/id_rsa Username:docker}
	I0806 00:53:31.753016    5266 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:53:31.763636    5266 status.go:257] multinode-100000-m03 status: &{Name:multinode-100000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-100000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-100000 status -v=7 --alsologtostderr: exit status 2 (329.247436ms)

                                                
                                                
-- stdout --
	multinode-100000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-100000-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-100000-m03
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 00:53:33.897884    5278 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:53:33.898071    5278 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:53:33.898077    5278 out.go:304] Setting ErrFile to fd 2...
	I0806 00:53:33.898080    5278 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:53:33.898253    5278 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	I0806 00:53:33.898438    5278 out.go:298] Setting JSON to false
	I0806 00:53:33.898461    5278 mustload.go:65] Loading cluster: multinode-100000
	I0806 00:53:33.898502    5278 notify.go:220] Checking for updates...
	I0806 00:53:33.898751    5278 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:53:33.898771    5278 status.go:255] checking status of multinode-100000 ...
	I0806 00:53:33.899147    5278 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:33.899179    5278 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:33.907964    5278 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52872
	I0806 00:53:33.908326    5278 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:33.908746    5278 main.go:141] libmachine: Using API Version  1
	I0806 00:53:33.908755    5278 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:33.908961    5278 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:33.909102    5278 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:53:33.909185    5278 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:53:33.909263    5278 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:53:33.910242    5278 status.go:330] multinode-100000 host status = "Running" (err=<nil>)
	I0806 00:53:33.910262    5278 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:53:33.910493    5278 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:33.910515    5278 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:33.918870    5278 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52874
	I0806 00:53:33.919192    5278 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:33.919559    5278 main.go:141] libmachine: Using API Version  1
	I0806 00:53:33.919589    5278 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:33.919805    5278 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:33.919918    5278 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:53:33.920009    5278 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:53:33.920270    5278 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:33.920293    5278 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:33.930177    5278 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52876
	I0806 00:53:33.930537    5278 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:33.930861    5278 main.go:141] libmachine: Using API Version  1
	I0806 00:53:33.930870    5278 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:33.931058    5278 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:33.931177    5278 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:53:33.931317    5278 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:53:33.931339    5278 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:53:33.931410    5278 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:53:33.931487    5278 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:53:33.931570    5278 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:53:33.931655    5278 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:53:33.969378    5278 ssh_runner.go:195] Run: systemctl --version
	I0806 00:53:33.973566    5278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:53:33.985833    5278 kubeconfig.go:125] found "multinode-100000" server: "https://192.169.0.13:8443"
	I0806 00:53:33.985859    5278 api_server.go:166] Checking apiserver status ...
	I0806 00:53:33.985894    5278 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:53:33.998727    5278 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1953/cgroup
	W0806 00:53:34.008990    5278 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1953/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0806 00:53:34.009050    5278 ssh_runner.go:195] Run: ls
	I0806 00:53:34.012298    5278 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0806 00:53:34.015515    5278 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0806 00:53:34.015526    5278 status.go:422] multinode-100000 apiserver status = Running (err=<nil>)
	I0806 00:53:34.015534    5278 status.go:257] multinode-100000 status: &{Name:multinode-100000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 00:53:34.015550    5278 status.go:255] checking status of multinode-100000-m02 ...
	I0806 00:53:34.015818    5278 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:34.015838    5278 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:34.024717    5278 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52880
	I0806 00:53:34.025058    5278 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:34.025417    5278 main.go:141] libmachine: Using API Version  1
	I0806 00:53:34.025434    5278 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:34.025627    5278 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:34.025729    5278 main.go:141] libmachine: (multinode-100000-m02) Calling .GetState
	I0806 00:53:34.025805    5278 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:53:34.025899    5278 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:53:34.026849    5278 status.go:330] multinode-100000-m02 host status = "Running" (err=<nil>)
	I0806 00:53:34.026858    5278 host.go:66] Checking if "multinode-100000-m02" exists ...
	I0806 00:53:34.027096    5278 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:34.027125    5278 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:34.035593    5278 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52882
	I0806 00:53:34.035938    5278 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:34.036311    5278 main.go:141] libmachine: Using API Version  1
	I0806 00:53:34.036326    5278 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:34.036524    5278 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:34.036630    5278 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:53:34.036712    5278 host.go:66] Checking if "multinode-100000-m02" exists ...
	I0806 00:53:34.036958    5278 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:34.036978    5278 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:34.045379    5278 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52884
	I0806 00:53:34.045700    5278 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:34.046049    5278 main.go:141] libmachine: Using API Version  1
	I0806 00:53:34.046072    5278 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:34.046279    5278 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:34.046394    5278 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:53:34.046517    5278 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:53:34.046530    5278 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:53:34.046609    5278 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:53:34.046687    5278 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:53:34.046770    5278 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:53:34.046843    5278 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:53:34.081692    5278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:53:34.092193    5278 status.go:257] multinode-100000-m02 status: &{Name:multinode-100000-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0806 00:53:34.092207    5278 status.go:255] checking status of multinode-100000-m03 ...
	I0806 00:53:34.092505    5278 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:34.092527    5278 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:34.101248    5278 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52887
	I0806 00:53:34.101594    5278 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:34.101949    5278 main.go:141] libmachine: Using API Version  1
	I0806 00:53:34.101964    5278 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:34.102187    5278 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:34.102292    5278 main.go:141] libmachine: (multinode-100000-m03) Calling .GetState
	I0806 00:53:34.102379    5278 main.go:141] libmachine: (multinode-100000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:53:34.102451    5278 main.go:141] libmachine: (multinode-100000-m03) DBG | hyperkit pid from json: 5220
	I0806 00:53:34.103424    5278 status.go:330] multinode-100000-m03 host status = "Running" (err=<nil>)
	I0806 00:53:34.103434    5278 host.go:66] Checking if "multinode-100000-m03" exists ...
	I0806 00:53:34.103681    5278 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:34.103707    5278 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:34.112039    5278 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52889
	I0806 00:53:34.112403    5278 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:34.112759    5278 main.go:141] libmachine: Using API Version  1
	I0806 00:53:34.112776    5278 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:34.113012    5278 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:34.113122    5278 main.go:141] libmachine: (multinode-100000-m03) Calling .GetIP
	I0806 00:53:34.113208    5278 host.go:66] Checking if "multinode-100000-m03" exists ...
	I0806 00:53:34.113480    5278 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:34.113501    5278 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:34.121922    5278 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52891
	I0806 00:53:34.122264    5278 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:34.122628    5278 main.go:141] libmachine: Using API Version  1
	I0806 00:53:34.122645    5278 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:34.122883    5278 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:34.123005    5278 main.go:141] libmachine: (multinode-100000-m03) Calling .DriverName
	I0806 00:53:34.123145    5278 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:53:34.123158    5278 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHHostname
	I0806 00:53:34.123251    5278 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHPort
	I0806 00:53:34.123336    5278 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:53:34.123413    5278 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHUsername
	I0806 00:53:34.123486    5278 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/id_rsa Username:docker}
	I0806 00:53:34.157148    5278 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:53:34.167568    5278 status.go:257] multinode-100000-m03 status: &{Name:multinode-100000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-100000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-100000 status -v=7 --alsologtostderr: exit status 2 (321.386049ms)

                                                
                                                
-- stdout --
	multinode-100000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-100000-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-100000-m03
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 00:53:35.363551    5293 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:53:35.363805    5293 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:53:35.363814    5293 out.go:304] Setting ErrFile to fd 2...
	I0806 00:53:35.363818    5293 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:53:35.363987    5293 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	I0806 00:53:35.364151    5293 out.go:298] Setting JSON to false
	I0806 00:53:35.364172    5293 mustload.go:65] Loading cluster: multinode-100000
	I0806 00:53:35.364217    5293 notify.go:220] Checking for updates...
	I0806 00:53:35.364513    5293 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:53:35.364529    5293 status.go:255] checking status of multinode-100000 ...
	I0806 00:53:35.364886    5293 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:35.364943    5293 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:35.373410    5293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52895
	I0806 00:53:35.373730    5293 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:35.374138    5293 main.go:141] libmachine: Using API Version  1
	I0806 00:53:35.374149    5293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:35.374365    5293 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:35.374534    5293 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:53:35.374642    5293 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:53:35.374709    5293 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:53:35.375701    5293 status.go:330] multinode-100000 host status = "Running" (err=<nil>)
	I0806 00:53:35.375720    5293 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:53:35.375967    5293 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:35.375988    5293 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:35.384248    5293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52897
	I0806 00:53:35.384602    5293 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:35.384919    5293 main.go:141] libmachine: Using API Version  1
	I0806 00:53:35.384933    5293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:35.385139    5293 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:35.385245    5293 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:53:35.385322    5293 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:53:35.385574    5293 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:35.385594    5293 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:35.395527    5293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52899
	I0806 00:53:35.395871    5293 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:35.396196    5293 main.go:141] libmachine: Using API Version  1
	I0806 00:53:35.396209    5293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:35.396389    5293 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:35.396495    5293 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:53:35.396623    5293 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:53:35.396643    5293 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:53:35.396726    5293 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:53:35.396802    5293 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:53:35.396879    5293 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:53:35.396968    5293 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:53:35.435368    5293 ssh_runner.go:195] Run: systemctl --version
	I0806 00:53:35.440127    5293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:53:35.451513    5293 kubeconfig.go:125] found "multinode-100000" server: "https://192.169.0.13:8443"
	I0806 00:53:35.451536    5293 api_server.go:166] Checking apiserver status ...
	I0806 00:53:35.451573    5293 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:53:35.463375    5293 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1953/cgroup
	W0806 00:53:35.471651    5293 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1953/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0806 00:53:35.471693    5293 ssh_runner.go:195] Run: ls
	I0806 00:53:35.475014    5293 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0806 00:53:35.478053    5293 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0806 00:53:35.478064    5293 status.go:422] multinode-100000 apiserver status = Running (err=<nil>)
	I0806 00:53:35.478073    5293 status.go:257] multinode-100000 status: &{Name:multinode-100000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 00:53:35.478085    5293 status.go:255] checking status of multinode-100000-m02 ...
	I0806 00:53:35.478344    5293 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:35.478364    5293 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:35.487046    5293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52903
	I0806 00:53:35.487383    5293 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:35.487734    5293 main.go:141] libmachine: Using API Version  1
	I0806 00:53:35.487749    5293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:35.487984    5293 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:35.488112    5293 main.go:141] libmachine: (multinode-100000-m02) Calling .GetState
	I0806 00:53:35.488216    5293 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:53:35.488278    5293 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:53:35.489291    5293 status.go:330] multinode-100000-m02 host status = "Running" (err=<nil>)
	I0806 00:53:35.489299    5293 host.go:66] Checking if "multinode-100000-m02" exists ...
	I0806 00:53:35.489553    5293 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:35.489578    5293 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:35.498215    5293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52905
	I0806 00:53:35.498578    5293 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:35.498915    5293 main.go:141] libmachine: Using API Version  1
	I0806 00:53:35.498924    5293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:35.499170    5293 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:35.499300    5293 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:53:35.499396    5293 host.go:66] Checking if "multinode-100000-m02" exists ...
	I0806 00:53:35.499688    5293 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:35.499714    5293 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:35.508240    5293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52907
	I0806 00:53:35.508563    5293 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:35.508917    5293 main.go:141] libmachine: Using API Version  1
	I0806 00:53:35.508935    5293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:35.509117    5293 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:35.509219    5293 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:53:35.509333    5293 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:53:35.509345    5293 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:53:35.509419    5293 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:53:35.509507    5293 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:53:35.509581    5293 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:53:35.509645    5293 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:53:35.544982    5293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:53:35.555193    5293 status.go:257] multinode-100000-m02 status: &{Name:multinode-100000-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0806 00:53:35.555215    5293 status.go:255] checking status of multinode-100000-m03 ...
	I0806 00:53:35.555471    5293 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:35.555494    5293 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:35.563990    5293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52910
	I0806 00:53:35.564338    5293 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:35.564685    5293 main.go:141] libmachine: Using API Version  1
	I0806 00:53:35.564703    5293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:35.564916    5293 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:35.565030    5293 main.go:141] libmachine: (multinode-100000-m03) Calling .GetState
	I0806 00:53:35.565107    5293 main.go:141] libmachine: (multinode-100000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:53:35.565189    5293 main.go:141] libmachine: (multinode-100000-m03) DBG | hyperkit pid from json: 5220
	I0806 00:53:35.566207    5293 status.go:330] multinode-100000-m03 host status = "Running" (err=<nil>)
	I0806 00:53:35.566217    5293 host.go:66] Checking if "multinode-100000-m03" exists ...
	I0806 00:53:35.566468    5293 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:35.566495    5293 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:35.574819    5293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52912
	I0806 00:53:35.575142    5293 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:35.575490    5293 main.go:141] libmachine: Using API Version  1
	I0806 00:53:35.575508    5293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:35.575717    5293 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:35.575835    5293 main.go:141] libmachine: (multinode-100000-m03) Calling .GetIP
	I0806 00:53:35.575912    5293 host.go:66] Checking if "multinode-100000-m03" exists ...
	I0806 00:53:35.576159    5293 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:35.576189    5293 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:35.584585    5293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52914
	I0806 00:53:35.584933    5293 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:35.585298    5293 main.go:141] libmachine: Using API Version  1
	I0806 00:53:35.585322    5293 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:35.585510    5293 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:35.585631    5293 main.go:141] libmachine: (multinode-100000-m03) Calling .DriverName
	I0806 00:53:35.585759    5293 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:53:35.585770    5293 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHHostname
	I0806 00:53:35.585854    5293 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHPort
	I0806 00:53:35.585930    5293 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:53:35.586015    5293 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHUsername
	I0806 00:53:35.586085    5293 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/id_rsa Username:docker}
	I0806 00:53:35.618748    5293 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:53:35.628925    5293 status.go:257] multinode-100000-m03 status: &{Name:multinode-100000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-100000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-100000 status -v=7 --alsologtostderr: exit status 2 (331.79383ms)

-- stdout --
	multinode-100000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-100000-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-100000-m03
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0806 00:53:39.998262    5306 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:53:40.007540    5306 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:53:40.007554    5306 out.go:304] Setting ErrFile to fd 2...
	I0806 00:53:40.007562    5306 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:53:40.007933    5306 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	I0806 00:53:40.008280    5306 out.go:298] Setting JSON to false
	I0806 00:53:40.008320    5306 mustload.go:65] Loading cluster: multinode-100000
	I0806 00:53:40.008418    5306 notify.go:220] Checking for updates...
	I0806 00:53:40.008769    5306 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:53:40.008791    5306 status.go:255] checking status of multinode-100000 ...
	I0806 00:53:40.009292    5306 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:40.009362    5306 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:40.018878    5306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52918
	I0806 00:53:40.019288    5306 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:40.019729    5306 main.go:141] libmachine: Using API Version  1
	I0806 00:53:40.019744    5306 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:40.019947    5306 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:40.020035    5306 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:53:40.020116    5306 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:53:40.020191    5306 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:53:40.021180    5306 status.go:330] multinode-100000 host status = "Running" (err=<nil>)
	I0806 00:53:40.021200    5306 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:53:40.021443    5306 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:40.021463    5306 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:40.029857    5306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52920
	I0806 00:53:40.030206    5306 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:40.030570    5306 main.go:141] libmachine: Using API Version  1
	I0806 00:53:40.030586    5306 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:40.030801    5306 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:40.030903    5306 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:53:40.030980    5306 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:53:40.031243    5306 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:40.031268    5306 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:40.041070    5306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52922
	I0806 00:53:40.041428    5306 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:40.041731    5306 main.go:141] libmachine: Using API Version  1
	I0806 00:53:40.041741    5306 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:40.041984    5306 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:40.042123    5306 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:53:40.042262    5306 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:53:40.042282    5306 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:53:40.042369    5306 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:53:40.042458    5306 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:53:40.042540    5306 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:53:40.042621    5306 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:53:40.078334    5306 ssh_runner.go:195] Run: systemctl --version
	I0806 00:53:40.082790    5306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:53:40.094536    5306 kubeconfig.go:125] found "multinode-100000" server: "https://192.169.0.13:8443"
	I0806 00:53:40.094561    5306 api_server.go:166] Checking apiserver status ...
	I0806 00:53:40.094596    5306 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:53:40.106422    5306 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1953/cgroup
	W0806 00:53:40.114598    5306 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1953/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0806 00:53:40.114637    5306 ssh_runner.go:195] Run: ls
	I0806 00:53:40.117896    5306 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0806 00:53:40.120920    5306 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0806 00:53:40.120933    5306 status.go:422] multinode-100000 apiserver status = Running (err=<nil>)
	I0806 00:53:40.120942    5306 status.go:257] multinode-100000 status: &{Name:multinode-100000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 00:53:40.120956    5306 status.go:255] checking status of multinode-100000-m02 ...
	I0806 00:53:40.121204    5306 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:40.121227    5306 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:40.130018    5306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52926
	I0806 00:53:40.130350    5306 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:40.130710    5306 main.go:141] libmachine: Using API Version  1
	I0806 00:53:40.130734    5306 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:40.130947    5306 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:40.131064    5306 main.go:141] libmachine: (multinode-100000-m02) Calling .GetState
	I0806 00:53:40.131162    5306 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:53:40.131239    5306 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:53:40.132264    5306 status.go:330] multinode-100000-m02 host status = "Running" (err=<nil>)
	I0806 00:53:40.132272    5306 host.go:66] Checking if "multinode-100000-m02" exists ...
	I0806 00:53:40.132522    5306 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:40.132544    5306 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:40.141004    5306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52928
	I0806 00:53:40.141399    5306 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:40.141750    5306 main.go:141] libmachine: Using API Version  1
	I0806 00:53:40.141767    5306 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:40.141990    5306 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:40.142109    5306 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:53:40.142204    5306 host.go:66] Checking if "multinode-100000-m02" exists ...
	I0806 00:53:40.142467    5306 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:40.142492    5306 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:40.150925    5306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52930
	I0806 00:53:40.151277    5306 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:40.151581    5306 main.go:141] libmachine: Using API Version  1
	I0806 00:53:40.151592    5306 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:40.151826    5306 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:40.151951    5306 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:53:40.152092    5306 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:53:40.152105    5306 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:53:40.152181    5306 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:53:40.152272    5306 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:53:40.152359    5306 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:53:40.152439    5306 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:53:40.188052    5306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:53:40.198052    5306 status.go:257] multinode-100000-m02 status: &{Name:multinode-100000-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0806 00:53:40.198067    5306 status.go:255] checking status of multinode-100000-m03 ...
	I0806 00:53:40.198364    5306 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:40.198388    5306 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:40.206955    5306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52933
	I0806 00:53:40.207304    5306 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:40.207661    5306 main.go:141] libmachine: Using API Version  1
	I0806 00:53:40.207684    5306 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:40.207878    5306 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:40.207981    5306 main.go:141] libmachine: (multinode-100000-m03) Calling .GetState
	I0806 00:53:40.208070    5306 main.go:141] libmachine: (multinode-100000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:53:40.208144    5306 main.go:141] libmachine: (multinode-100000-m03) DBG | hyperkit pid from json: 5220
	I0806 00:53:40.209153    5306 status.go:330] multinode-100000-m03 host status = "Running" (err=<nil>)
	I0806 00:53:40.209162    5306 host.go:66] Checking if "multinode-100000-m03" exists ...
	I0806 00:53:40.209398    5306 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:40.209426    5306 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:40.217812    5306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52935
	I0806 00:53:40.218151    5306 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:40.218484    5306 main.go:141] libmachine: Using API Version  1
	I0806 00:53:40.218498    5306 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:40.218688    5306 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:40.218815    5306 main.go:141] libmachine: (multinode-100000-m03) Calling .GetIP
	I0806 00:53:40.218902    5306 host.go:66] Checking if "multinode-100000-m03" exists ...
	I0806 00:53:40.219147    5306 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:40.219172    5306 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:40.227652    5306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52937
	I0806 00:53:40.227985    5306 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:40.228343    5306 main.go:141] libmachine: Using API Version  1
	I0806 00:53:40.228361    5306 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:40.228594    5306 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:40.228727    5306 main.go:141] libmachine: (multinode-100000-m03) Calling .DriverName
	I0806 00:53:40.228861    5306 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:53:40.228871    5306 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHHostname
	I0806 00:53:40.228944    5306 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHPort
	I0806 00:53:40.229017    5306 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:53:40.229110    5306 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHUsername
	I0806 00:53:40.229196    5306 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/id_rsa Username:docker}
	I0806 00:53:40.263369    5306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:53:40.274673    5306 status.go:257] multinode-100000-m03 status: &{Name:multinode-100000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-100000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-100000 status -v=7 --alsologtostderr: exit status 2 (320.924776ms)

-- stdout --
	multinode-100000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-100000-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-100000-m03
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0806 00:53:44.384111    5324 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:53:44.384302    5324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:53:44.384311    5324 out.go:304] Setting ErrFile to fd 2...
	I0806 00:53:44.384315    5324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:53:44.384485    5324 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	I0806 00:53:44.384651    5324 out.go:298] Setting JSON to false
	I0806 00:53:44.384672    5324 mustload.go:65] Loading cluster: multinode-100000
	I0806 00:53:44.384715    5324 notify.go:220] Checking for updates...
	I0806 00:53:44.384958    5324 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:53:44.384975    5324 status.go:255] checking status of multinode-100000 ...
	I0806 00:53:44.385328    5324 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:44.385369    5324 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:44.394060    5324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52941
	I0806 00:53:44.394478    5324 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:44.394870    5324 main.go:141] libmachine: Using API Version  1
	I0806 00:53:44.394879    5324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:44.395083    5324 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:44.395182    5324 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:53:44.395273    5324 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:53:44.395348    5324 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:53:44.396347    5324 status.go:330] multinode-100000 host status = "Running" (err=<nil>)
	I0806 00:53:44.396368    5324 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:53:44.396607    5324 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:44.396627    5324 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:44.405273    5324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52943
	I0806 00:53:44.405602    5324 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:44.405925    5324 main.go:141] libmachine: Using API Version  1
	I0806 00:53:44.405937    5324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:44.406165    5324 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:44.406285    5324 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:53:44.406361    5324 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:53:44.406608    5324 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:44.406632    5324 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:44.415218    5324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52945
	I0806 00:53:44.415547    5324 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:44.415847    5324 main.go:141] libmachine: Using API Version  1
	I0806 00:53:44.415856    5324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:44.416047    5324 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:44.416151    5324 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:53:44.416278    5324 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:53:44.416296    5324 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:53:44.416373    5324 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:53:44.416449    5324 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:53:44.416541    5324 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:53:44.416630    5324 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:53:44.453850    5324 ssh_runner.go:195] Run: systemctl --version
	I0806 00:53:44.458109    5324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:53:44.470075    5324 kubeconfig.go:125] found "multinode-100000" server: "https://192.169.0.13:8443"
	I0806 00:53:44.470104    5324 api_server.go:166] Checking apiserver status ...
	I0806 00:53:44.470144    5324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:53:44.482161    5324 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1953/cgroup
	W0806 00:53:44.490244    5324 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1953/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0806 00:53:44.490288    5324 ssh_runner.go:195] Run: ls
	I0806 00:53:44.493529    5324 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0806 00:53:44.496532    5324 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0806 00:53:44.496542    5324 status.go:422] multinode-100000 apiserver status = Running (err=<nil>)
	I0806 00:53:44.496557    5324 status.go:257] multinode-100000 status: &{Name:multinode-100000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 00:53:44.496568    5324 status.go:255] checking status of multinode-100000-m02 ...
	I0806 00:53:44.496803    5324 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:44.496823    5324 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:44.505668    5324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52949
	I0806 00:53:44.506033    5324 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:44.506374    5324 main.go:141] libmachine: Using API Version  1
	I0806 00:53:44.506388    5324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:44.506603    5324 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:44.506714    5324 main.go:141] libmachine: (multinode-100000-m02) Calling .GetState
	I0806 00:53:44.506791    5324 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:53:44.506871    5324 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:53:44.507856    5324 status.go:330] multinode-100000-m02 host status = "Running" (err=<nil>)
	I0806 00:53:44.507865    5324 host.go:66] Checking if "multinode-100000-m02" exists ...
	I0806 00:53:44.508129    5324 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:44.508158    5324 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:44.516813    5324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52951
	I0806 00:53:44.517140    5324 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:44.517490    5324 main.go:141] libmachine: Using API Version  1
	I0806 00:53:44.517501    5324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:44.517718    5324 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:44.517833    5324 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:53:44.517906    5324 host.go:66] Checking if "multinode-100000-m02" exists ...
	I0806 00:53:44.518157    5324 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:44.518180    5324 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:44.526952    5324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52953
	I0806 00:53:44.527310    5324 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:44.527628    5324 main.go:141] libmachine: Using API Version  1
	I0806 00:53:44.527646    5324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:44.527851    5324 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:44.527959    5324 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:53:44.528086    5324 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:53:44.528097    5324 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:53:44.528180    5324 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:53:44.528250    5324 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:53:44.528343    5324 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:53:44.528433    5324 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:53:44.564862    5324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:53:44.574890    5324 status.go:257] multinode-100000-m02 status: &{Name:multinode-100000-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0806 00:53:44.574904    5324 status.go:255] checking status of multinode-100000-m03 ...
	I0806 00:53:44.575173    5324 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:44.575194    5324 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:44.583682    5324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52956
	I0806 00:53:44.584038    5324 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:44.584352    5324 main.go:141] libmachine: Using API Version  1
	I0806 00:53:44.584362    5324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:44.584591    5324 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:44.584709    5324 main.go:141] libmachine: (multinode-100000-m03) Calling .GetState
	I0806 00:53:44.584795    5324 main.go:141] libmachine: (multinode-100000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:53:44.584872    5324 main.go:141] libmachine: (multinode-100000-m03) DBG | hyperkit pid from json: 5220
	I0806 00:53:44.585880    5324 status.go:330] multinode-100000-m03 host status = "Running" (err=<nil>)
	I0806 00:53:44.585889    5324 host.go:66] Checking if "multinode-100000-m03" exists ...
	I0806 00:53:44.586138    5324 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:44.586162    5324 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:44.594672    5324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52958
	I0806 00:53:44.594998    5324 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:44.595303    5324 main.go:141] libmachine: Using API Version  1
	I0806 00:53:44.595312    5324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:44.595515    5324 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:44.595628    5324 main.go:141] libmachine: (multinode-100000-m03) Calling .GetIP
	I0806 00:53:44.595703    5324 host.go:66] Checking if "multinode-100000-m03" exists ...
	I0806 00:53:44.595944    5324 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:44.595967    5324 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:44.604443    5324 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52960
	I0806 00:53:44.604809    5324 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:44.605136    5324 main.go:141] libmachine: Using API Version  1
	I0806 00:53:44.605146    5324 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:44.605335    5324 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:44.605435    5324 main.go:141] libmachine: (multinode-100000-m03) Calling .DriverName
	I0806 00:53:44.605549    5324 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:53:44.605561    5324 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHHostname
	I0806 00:53:44.605643    5324 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHPort
	I0806 00:53:44.605727    5324 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:53:44.605805    5324 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHUsername
	I0806 00:53:44.605879    5324 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/id_rsa Username:docker}
	I0806 00:53:44.638449    5324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:53:44.649802    5324 status.go:257] multinode-100000-m03 status: &{Name:multinode-100000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-100000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-100000 status -v=7 --alsologtostderr: exit status 2 (321.460618ms)

-- stdout --
	multinode-100000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-100000-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-100000-m03
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0806 00:53:52.279251    5336 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:53:52.279415    5336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:53:52.279423    5336 out.go:304] Setting ErrFile to fd 2...
	I0806 00:53:52.279427    5336 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:53:52.279587    5336 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	I0806 00:53:52.279764    5336 out.go:298] Setting JSON to false
	I0806 00:53:52.279787    5336 mustload.go:65] Loading cluster: multinode-100000
	I0806 00:53:52.279833    5336 notify.go:220] Checking for updates...
	I0806 00:53:52.280075    5336 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:53:52.280093    5336 status.go:255] checking status of multinode-100000 ...
	I0806 00:53:52.280443    5336 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:52.280486    5336 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:52.289127    5336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52964
	I0806 00:53:52.289460    5336 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:52.289879    5336 main.go:141] libmachine: Using API Version  1
	I0806 00:53:52.289892    5336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:52.290165    5336 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:52.290305    5336 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:53:52.290406    5336 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:53:52.290475    5336 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:53:52.291418    5336 status.go:330] multinode-100000 host status = "Running" (err=<nil>)
	I0806 00:53:52.291438    5336 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:53:52.291692    5336 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:52.291710    5336 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:52.300105    5336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52966
	I0806 00:53:52.300458    5336 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:52.300781    5336 main.go:141] libmachine: Using API Version  1
	I0806 00:53:52.300805    5336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:52.301032    5336 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:52.301140    5336 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:53:52.301215    5336 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:53:52.301462    5336 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:52.301490    5336 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:52.309962    5336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52968
	I0806 00:53:52.310308    5336 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:52.310671    5336 main.go:141] libmachine: Using API Version  1
	I0806 00:53:52.310694    5336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:52.310892    5336 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:52.310981    5336 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:53:52.311115    5336 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:53:52.311135    5336 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:53:52.311223    5336 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:53:52.311302    5336 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:53:52.311378    5336 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:53:52.311451    5336 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:53:52.349875    5336 ssh_runner.go:195] Run: systemctl --version
	I0806 00:53:52.354150    5336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:53:52.366505    5336 kubeconfig.go:125] found "multinode-100000" server: "https://192.169.0.13:8443"
	I0806 00:53:52.366532    5336 api_server.go:166] Checking apiserver status ...
	I0806 00:53:52.366569    5336 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:53:52.377937    5336 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1953/cgroup
	W0806 00:53:52.386174    5336 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1953/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0806 00:53:52.386230    5336 ssh_runner.go:195] Run: ls
	I0806 00:53:52.389196    5336 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0806 00:53:52.392225    5336 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0806 00:53:52.392236    5336 status.go:422] multinode-100000 apiserver status = Running (err=<nil>)
	I0806 00:53:52.392245    5336 status.go:257] multinode-100000 status: &{Name:multinode-100000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 00:53:52.392255    5336 status.go:255] checking status of multinode-100000-m02 ...
	I0806 00:53:52.392536    5336 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:52.392555    5336 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:52.401286    5336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52972
	I0806 00:53:52.401627    5336 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:52.401984    5336 main.go:141] libmachine: Using API Version  1
	I0806 00:53:52.402001    5336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:52.402232    5336 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:52.402353    5336 main.go:141] libmachine: (multinode-100000-m02) Calling .GetState
	I0806 00:53:52.402437    5336 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:53:52.402522    5336 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:53:52.403475    5336 status.go:330] multinode-100000-m02 host status = "Running" (err=<nil>)
	I0806 00:53:52.403485    5336 host.go:66] Checking if "multinode-100000-m02" exists ...
	I0806 00:53:52.403733    5336 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:52.403761    5336 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:52.412284    5336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52974
	I0806 00:53:52.412639    5336 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:52.412983    5336 main.go:141] libmachine: Using API Version  1
	I0806 00:53:52.412996    5336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:52.413214    5336 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:52.413324    5336 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:53:52.413402    5336 host.go:66] Checking if "multinode-100000-m02" exists ...
	I0806 00:53:52.413664    5336 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:52.413688    5336 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:52.422139    5336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52976
	I0806 00:53:52.422523    5336 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:52.422866    5336 main.go:141] libmachine: Using API Version  1
	I0806 00:53:52.422882    5336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:52.423095    5336 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:52.423206    5336 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:53:52.423327    5336 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:53:52.423338    5336 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:53:52.423423    5336 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:53:52.423500    5336 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:53:52.423602    5336 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:53:52.423687    5336 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:53:52.458859    5336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:53:52.468808    5336 status.go:257] multinode-100000-m02 status: &{Name:multinode-100000-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0806 00:53:52.468823    5336 status.go:255] checking status of multinode-100000-m03 ...
	I0806 00:53:52.469092    5336 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:52.469121    5336 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:52.477800    5336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52979
	I0806 00:53:52.478123    5336 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:52.478475    5336 main.go:141] libmachine: Using API Version  1
	I0806 00:53:52.478493    5336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:52.478703    5336 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:52.478809    5336 main.go:141] libmachine: (multinode-100000-m03) Calling .GetState
	I0806 00:53:52.478892    5336 main.go:141] libmachine: (multinode-100000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:53:52.478973    5336 main.go:141] libmachine: (multinode-100000-m03) DBG | hyperkit pid from json: 5220
	I0806 00:53:52.479957    5336 status.go:330] multinode-100000-m03 host status = "Running" (err=<nil>)
	I0806 00:53:52.479964    5336 host.go:66] Checking if "multinode-100000-m03" exists ...
	I0806 00:53:52.480214    5336 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:52.480241    5336 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:52.488724    5336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52981
	I0806 00:53:52.489064    5336 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:52.489399    5336 main.go:141] libmachine: Using API Version  1
	I0806 00:53:52.489417    5336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:52.489649    5336 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:52.489792    5336 main.go:141] libmachine: (multinode-100000-m03) Calling .GetIP
	I0806 00:53:52.489879    5336 host.go:66] Checking if "multinode-100000-m03" exists ...
	I0806 00:53:52.490154    5336 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:53:52.490176    5336 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:53:52.498623    5336 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52983
	I0806 00:53:52.498963    5336 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:53:52.499318    5336 main.go:141] libmachine: Using API Version  1
	I0806 00:53:52.499331    5336 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:53:52.499554    5336 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:53:52.499660    5336 main.go:141] libmachine: (multinode-100000-m03) Calling .DriverName
	I0806 00:53:52.499791    5336 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:53:52.499803    5336 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHHostname
	I0806 00:53:52.499916    5336 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHPort
	I0806 00:53:52.499998    5336 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:53:52.500080    5336 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHUsername
	I0806 00:53:52.500162    5336 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/id_rsa Username:docker}
	I0806 00:53:52.533017    5336 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:53:52.544390    5336 status.go:257] multinode-100000-m03 status: &{Name:multinode-100000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-100000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-100000 status -v=7 --alsologtostderr: exit status 2 (322.474555ms)

-- stdout --
	multinode-100000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-100000-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-100000-m03
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0806 00:54:02.685125    5352 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:54:02.685292    5352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:54:02.685298    5352 out.go:304] Setting ErrFile to fd 2...
	I0806 00:54:02.685302    5352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:54:02.685490    5352 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	I0806 00:54:02.685666    5352 out.go:298] Setting JSON to false
	I0806 00:54:02.685687    5352 mustload.go:65] Loading cluster: multinode-100000
	I0806 00:54:02.685720    5352 notify.go:220] Checking for updates...
	I0806 00:54:02.686002    5352 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:54:02.686016    5352 status.go:255] checking status of multinode-100000 ...
	I0806 00:54:02.686403    5352 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:54:02.686445    5352 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:54:02.695618    5352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52987
	I0806 00:54:02.696040    5352 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:54:02.696458    5352 main.go:141] libmachine: Using API Version  1
	I0806 00:54:02.696468    5352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:54:02.696683    5352 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:54:02.696785    5352 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:54:02.696876    5352 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:54:02.696939    5352 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:54:02.697904    5352 status.go:330] multinode-100000 host status = "Running" (err=<nil>)
	I0806 00:54:02.697925    5352 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:54:02.698156    5352 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:54:02.698178    5352 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:54:02.706587    5352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52989
	I0806 00:54:02.706905    5352 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:54:02.707271    5352 main.go:141] libmachine: Using API Version  1
	I0806 00:54:02.707287    5352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:54:02.707481    5352 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:54:02.707596    5352 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:54:02.707677    5352 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:54:02.707926    5352 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:54:02.707949    5352 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:54:02.716313    5352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52991
	I0806 00:54:02.716625    5352 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:54:02.716975    5352 main.go:141] libmachine: Using API Version  1
	I0806 00:54:02.716990    5352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:54:02.717201    5352 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:54:02.717322    5352 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:54:02.717461    5352 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:54:02.717480    5352 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:54:02.717554    5352 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:54:02.717620    5352 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:54:02.717705    5352 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:54:02.717791    5352 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:54:02.754931    5352 ssh_runner.go:195] Run: systemctl --version
	I0806 00:54:02.759067    5352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:54:02.770963    5352 kubeconfig.go:125] found "multinode-100000" server: "https://192.169.0.13:8443"
	I0806 00:54:02.770986    5352 api_server.go:166] Checking apiserver status ...
	I0806 00:54:02.771023    5352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:54:02.782407    5352 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1953/cgroup
	W0806 00:54:02.790689    5352 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1953/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0806 00:54:02.790735    5352 ssh_runner.go:195] Run: ls
	I0806 00:54:02.793999    5352 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0806 00:54:02.796918    5352 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0806 00:54:02.796929    5352 status.go:422] multinode-100000 apiserver status = Running (err=<nil>)
	I0806 00:54:02.796941    5352 status.go:257] multinode-100000 status: &{Name:multinode-100000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 00:54:02.796952    5352 status.go:255] checking status of multinode-100000-m02 ...
	I0806 00:54:02.797202    5352 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:54:02.797222    5352 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:54:02.805899    5352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52995
	I0806 00:54:02.806245    5352 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:54:02.806575    5352 main.go:141] libmachine: Using API Version  1
	I0806 00:54:02.806582    5352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:54:02.806787    5352 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:54:02.806916    5352 main.go:141] libmachine: (multinode-100000-m02) Calling .GetState
	I0806 00:54:02.807013    5352 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:54:02.807113    5352 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:54:02.808037    5352 status.go:330] multinode-100000-m02 host status = "Running" (err=<nil>)
	I0806 00:54:02.808045    5352 host.go:66] Checking if "multinode-100000-m02" exists ...
	I0806 00:54:02.808289    5352 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:54:02.808324    5352 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:54:02.816994    5352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52997
	I0806 00:54:02.817346    5352 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:54:02.817661    5352 main.go:141] libmachine: Using API Version  1
	I0806 00:54:02.817671    5352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:54:02.817902    5352 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:54:02.818017    5352 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:54:02.818099    5352 host.go:66] Checking if "multinode-100000-m02" exists ...
	I0806 00:54:02.818361    5352 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:54:02.818390    5352 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:54:02.826900    5352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52999
	I0806 00:54:02.827232    5352 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:54:02.827556    5352 main.go:141] libmachine: Using API Version  1
	I0806 00:54:02.827571    5352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:54:02.827781    5352 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:54:02.827902    5352 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:54:02.828029    5352 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:54:02.828041    5352 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:54:02.828114    5352 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:54:02.828187    5352 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:54:02.828268    5352 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:54:02.828338    5352 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:54:02.864863    5352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:54:02.875603    5352 status.go:257] multinode-100000-m02 status: &{Name:multinode-100000-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0806 00:54:02.875616    5352 status.go:255] checking status of multinode-100000-m03 ...
	I0806 00:54:02.875874    5352 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:54:02.875895    5352 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:54:02.884777    5352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53002
	I0806 00:54:02.885125    5352 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:54:02.885458    5352 main.go:141] libmachine: Using API Version  1
	I0806 00:54:02.885468    5352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:54:02.885681    5352 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:54:02.885797    5352 main.go:141] libmachine: (multinode-100000-m03) Calling .GetState
	I0806 00:54:02.885881    5352 main.go:141] libmachine: (multinode-100000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:54:02.885985    5352 main.go:141] libmachine: (multinode-100000-m03) DBG | hyperkit pid from json: 5220
	I0806 00:54:02.886899    5352 status.go:330] multinode-100000-m03 host status = "Running" (err=<nil>)
	I0806 00:54:02.886909    5352 host.go:66] Checking if "multinode-100000-m03" exists ...
	I0806 00:54:02.887160    5352 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:54:02.887186    5352 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:54:02.895844    5352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53004
	I0806 00:54:02.896229    5352 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:54:02.896594    5352 main.go:141] libmachine: Using API Version  1
	I0806 00:54:02.896612    5352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:54:02.896802    5352 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:54:02.896922    5352 main.go:141] libmachine: (multinode-100000-m03) Calling .GetIP
	I0806 00:54:02.897007    5352 host.go:66] Checking if "multinode-100000-m03" exists ...
	I0806 00:54:02.897262    5352 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:54:02.897282    5352 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:54:02.905720    5352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53006
	I0806 00:54:02.906060    5352 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:54:02.906382    5352 main.go:141] libmachine: Using API Version  1
	I0806 00:54:02.906391    5352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:54:02.906576    5352 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:54:02.906694    5352 main.go:141] libmachine: (multinode-100000-m03) Calling .DriverName
	I0806 00:54:02.906815    5352 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:54:02.906826    5352 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHHostname
	I0806 00:54:02.906909    5352 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHPort
	I0806 00:54:02.906986    5352 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:54:02.907057    5352 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHUsername
	I0806 00:54:02.907133    5352 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/id_rsa Username:docker}
	I0806 00:54:02.940366    5352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:54:02.951620    5352 status.go:257] multinode-100000-m03 status: &{Name:multinode-100000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-100000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-100000 status -v=7 --alsologtostderr: exit status 2 (323.88161ms)

-- stdout --
	multinode-100000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-100000-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-100000-m03
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0806 00:54:25.022002    5380 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:54:25.022291    5380 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:54:25.022296    5380 out.go:304] Setting ErrFile to fd 2...
	I0806 00:54:25.022300    5380 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:54:25.022478    5380 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	I0806 00:54:25.022681    5380 out.go:298] Setting JSON to false
	I0806 00:54:25.022703    5380 mustload.go:65] Loading cluster: multinode-100000
	I0806 00:54:25.022742    5380 notify.go:220] Checking for updates...
	I0806 00:54:25.023761    5380 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:54:25.023797    5380 status.go:255] checking status of multinode-100000 ...
	I0806 00:54:25.024286    5380 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:54:25.024331    5380 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:54:25.033310    5380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53010
	I0806 00:54:25.033817    5380 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:54:25.034242    5380 main.go:141] libmachine: Using API Version  1
	I0806 00:54:25.034251    5380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:54:25.034446    5380 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:54:25.034555    5380 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:54:25.034643    5380 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:54:25.034734    5380 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:54:25.035673    5380 status.go:330] multinode-100000 host status = "Running" (err=<nil>)
	I0806 00:54:25.035688    5380 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:54:25.035925    5380 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:54:25.035958    5380 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:54:25.044300    5380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53012
	I0806 00:54:25.044613    5380 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:54:25.044953    5380 main.go:141] libmachine: Using API Version  1
	I0806 00:54:25.044973    5380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:54:25.045172    5380 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:54:25.045278    5380 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:54:25.045363    5380 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:54:25.045608    5380 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:54:25.045629    5380 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:54:25.055335    5380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53014
	I0806 00:54:25.055677    5380 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:54:25.055989    5380 main.go:141] libmachine: Using API Version  1
	I0806 00:54:25.055999    5380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:54:25.056190    5380 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:54:25.056283    5380 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:54:25.056413    5380 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:54:25.056437    5380 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:54:25.056515    5380 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:54:25.056586    5380 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:54:25.056668    5380 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:54:25.056760    5380 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:54:25.093016    5380 ssh_runner.go:195] Run: systemctl --version
	I0806 00:54:25.097212    5380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:54:25.108964    5380 kubeconfig.go:125] found "multinode-100000" server: "https://192.169.0.13:8443"
	I0806 00:54:25.108990    5380 api_server.go:166] Checking apiserver status ...
	I0806 00:54:25.109030    5380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:54:25.120475    5380 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1953/cgroup
	W0806 00:54:25.128949    5380 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1953/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0806 00:54:25.128994    5380 ssh_runner.go:195] Run: ls
	I0806 00:54:25.132106    5380 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0806 00:54:25.135809    5380 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0806 00:54:25.135820    5380 status.go:422] multinode-100000 apiserver status = Running (err=<nil>)
	I0806 00:54:25.135828    5380 status.go:257] multinode-100000 status: &{Name:multinode-100000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 00:54:25.135839    5380 status.go:255] checking status of multinode-100000-m02 ...
	I0806 00:54:25.136098    5380 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:54:25.136121    5380 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:54:25.144837    5380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53018
	I0806 00:54:25.145168    5380 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:54:25.145555    5380 main.go:141] libmachine: Using API Version  1
	I0806 00:54:25.145571    5380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:54:25.145761    5380 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:54:25.145864    5380 main.go:141] libmachine: (multinode-100000-m02) Calling .GetState
	I0806 00:54:25.145949    5380 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:54:25.146032    5380 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:54:25.146960    5380 status.go:330] multinode-100000-m02 host status = "Running" (err=<nil>)
	I0806 00:54:25.146967    5380 host.go:66] Checking if "multinode-100000-m02" exists ...
	I0806 00:54:25.147243    5380 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:54:25.147266    5380 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:54:25.155994    5380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53020
	I0806 00:54:25.156348    5380 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:54:25.156698    5380 main.go:141] libmachine: Using API Version  1
	I0806 00:54:25.156712    5380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:54:25.156939    5380 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:54:25.157056    5380 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:54:25.157153    5380 host.go:66] Checking if "multinode-100000-m02" exists ...
	I0806 00:54:25.157415    5380 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:54:25.157438    5380 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:54:25.166131    5380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53022
	I0806 00:54:25.166476    5380 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:54:25.166827    5380 main.go:141] libmachine: Using API Version  1
	I0806 00:54:25.166840    5380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:54:25.167063    5380 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:54:25.167184    5380 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:54:25.167307    5380 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:54:25.167325    5380 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:54:25.167403    5380 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:54:25.167475    5380 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:54:25.167561    5380 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:54:25.167639    5380 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:54:25.204061    5380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:54:25.214655    5380 status.go:257] multinode-100000-m02 status: &{Name:multinode-100000-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0806 00:54:25.214676    5380 status.go:255] checking status of multinode-100000-m03 ...
	I0806 00:54:25.214977    5380 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:54:25.214999    5380 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:54:25.223594    5380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53025
	I0806 00:54:25.223949    5380 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:54:25.224285    5380 main.go:141] libmachine: Using API Version  1
	I0806 00:54:25.224302    5380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:54:25.224499    5380 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:54:25.224615    5380 main.go:141] libmachine: (multinode-100000-m03) Calling .GetState
	I0806 00:54:25.224701    5380 main.go:141] libmachine: (multinode-100000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:54:25.224784    5380 main.go:141] libmachine: (multinode-100000-m03) DBG | hyperkit pid from json: 5220
	I0806 00:54:25.225746    5380 status.go:330] multinode-100000-m03 host status = "Running" (err=<nil>)
	I0806 00:54:25.225755    5380 host.go:66] Checking if "multinode-100000-m03" exists ...
	I0806 00:54:25.225996    5380 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:54:25.226022    5380 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:54:25.234613    5380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53027
	I0806 00:54:25.234917    5380 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:54:25.235242    5380 main.go:141] libmachine: Using API Version  1
	I0806 00:54:25.235261    5380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:54:25.235484    5380 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:54:25.235599    5380 main.go:141] libmachine: (multinode-100000-m03) Calling .GetIP
	I0806 00:54:25.235681    5380 host.go:66] Checking if "multinode-100000-m03" exists ...
	I0806 00:54:25.235938    5380 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:54:25.235963    5380 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:54:25.244504    5380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53029
	I0806 00:54:25.244844    5380 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:54:25.245158    5380 main.go:141] libmachine: Using API Version  1
	I0806 00:54:25.245179    5380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:54:25.245388    5380 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:54:25.245508    5380 main.go:141] libmachine: (multinode-100000-m03) Calling .DriverName
	I0806 00:54:25.245634    5380 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:54:25.245645    5380 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHHostname
	I0806 00:54:25.245731    5380 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHPort
	I0806 00:54:25.245831    5380 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:54:25.245916    5380 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHUsername
	I0806 00:54:25.245993    5380 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/id_rsa Username:docker}
	I0806 00:54:25.279207    5380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:54:25.290194    5380 status.go:257] multinode-100000-m03 status: &{Name:multinode-100000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-100000 status -v=7 --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-100000 -n multinode-100000
helpers_test.go:244: <<< TestMultiNode/serial/StartAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-100000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-100000 logs -n 25: (1.977243899s)
helpers_test.go:252: TestMultiNode/serial/StartAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| kubectl | -p multinode-100000 -- get pods -o   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:49 PDT | 06 Aug 24 00:49 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:49 PDT | 06 Aug 24 00:49 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:49 PDT | 06 Aug 24 00:49 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec          | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT |                     |
	|         | busybox-fc5497c4f-6l7f2 --           |                  |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec          | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | busybox-fc5497c4f-dzbn7 --           |                  |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec          | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT |                     |
	|         | busybox-fc5497c4f-6l7f2 --           |                  |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec          | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | busybox-fc5497c4f-dzbn7 --           |                  |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec          | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT |                     |
	|         | busybox-fc5497c4f-6l7f2 -- nslookup  |                  |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec          | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | busybox-fc5497c4f-dzbn7 -- nslookup  |                  |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec          | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT |                     |
	|         | busybox-fc5497c4f-6l7f2              |                  |         |         |                     |                     |
	|         | -- sh -c nslookup                    |                  |         |         |                     |                     |
	|         | host.minikube.internal | awk         |                  |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec          | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | busybox-fc5497c4f-dzbn7              |                  |         |         |                     |                     |
	|         | -- sh -c nslookup                    |                  |         |         |                     |                     |
	|         | host.minikube.internal | awk         |                  |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec          | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | busybox-fc5497c4f-dzbn7 -- sh        |                  |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |                  |         |         |                     |                     |
	| node    | add -p multinode-100000 -v 3         | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:52 PDT |
	|         | --alsologtostderr                    |                  |         |         |                     |                     |
	| node    | multinode-100000 node stop m03       | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:52 PDT | 06 Aug 24 00:52 PDT |
	| node    | multinode-100000 node start          | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:52 PDT | 06 Aug 24 00:53 PDT |
	|         | m03 -v=7 --alsologtostderr           |                  |         |         |                     |                     |
	|---------|--------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 00:35:32
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 00:35:32.676325    4292 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:35:32.676601    4292 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:35:32.676607    4292 out.go:304] Setting ErrFile to fd 2...
	I0806 00:35:32.676610    4292 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:35:32.676768    4292 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	I0806 00:35:32.678248    4292 out.go:298] Setting JSON to false
	I0806 00:35:32.700659    4292 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2094,"bootTime":1722927638,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0806 00:35:32.700749    4292 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 00:35:32.723275    4292 out.go:177] * [multinode-100000] minikube v1.33.1 on Darwin 14.5
	I0806 00:35:32.765686    4292 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 00:35:32.765838    4292 notify.go:220] Checking for updates...
	I0806 00:35:32.808341    4292 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:35:32.829496    4292 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0806 00:35:32.850407    4292 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:35:32.871672    4292 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 00:35:32.892641    4292 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 00:35:32.913945    4292 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:35:32.944520    4292 out.go:177] * Using the hyperkit driver based on user configuration
	I0806 00:35:32.986143    4292 start.go:297] selected driver: hyperkit
	I0806 00:35:32.986161    4292 start.go:901] validating driver "hyperkit" against <nil>
	I0806 00:35:32.986176    4292 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 00:35:32.989717    4292 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:35:32.989824    4292 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19370-944/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0806 00:35:32.998218    4292 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0806 00:35:33.002169    4292 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:35:33.002189    4292 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0806 00:35:33.002223    4292 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 00:35:33.002423    4292 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 00:35:33.002481    4292 cni.go:84] Creating CNI manager for ""
	I0806 00:35:33.002490    4292 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0806 00:35:33.002502    4292 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0806 00:35:33.002569    4292 start.go:340] cluster config:
	{Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:35:33.002652    4292 iso.go:125] acquiring lock: {Name:mka9ceffb203a07dd8928fb34e5b66df1a4204ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:35:33.044508    4292 out.go:177] * Starting "multinode-100000" primary control-plane node in "multinode-100000" cluster
	I0806 00:35:33.065219    4292 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:35:33.065293    4292 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0806 00:35:33.065354    4292 cache.go:56] Caching tarball of preloaded images
	I0806 00:35:33.065635    4292 preload.go:172] Found /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0806 00:35:33.065654    4292 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 00:35:33.066173    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:35:33.066211    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json: {Name:mk72349cbf3074da6761af52b168e673548f3ffe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:35:33.066817    4292 start.go:360] acquireMachinesLock for multinode-100000: {Name:mk23fe223591838ba69a1052c4474834b6e8897d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:35:33.066922    4292 start.go:364] duration metric: took 85.684µs to acquireMachinesLock for "multinode-100000"
	I0806 00:35:33.066972    4292 start.go:93] Provisioning new machine with config: &{Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 00:35:33.067065    4292 start.go:125] createHost starting for "" (driver="hyperkit")
	I0806 00:35:33.088582    4292 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 00:35:33.088841    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:35:33.088907    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:35:33.098805    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52410
	I0806 00:35:33.099159    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:35:33.099600    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:35:33.099614    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:35:33.099818    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:35:33.099943    4292 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:35:33.100033    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:33.100130    4292 start.go:159] libmachine.API.Create for "multinode-100000" (driver="hyperkit")
	I0806 00:35:33.100152    4292 client.go:168] LocalClient.Create starting
	I0806 00:35:33.100189    4292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem
	I0806 00:35:33.100243    4292 main.go:141] libmachine: Decoding PEM data...
	I0806 00:35:33.100257    4292 main.go:141] libmachine: Parsing certificate...
	I0806 00:35:33.100320    4292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem
	I0806 00:35:33.100359    4292 main.go:141] libmachine: Decoding PEM data...
	I0806 00:35:33.100370    4292 main.go:141] libmachine: Parsing certificate...
	I0806 00:35:33.100382    4292 main.go:141] libmachine: Running pre-create checks...
	I0806 00:35:33.100392    4292 main.go:141] libmachine: (multinode-100000) Calling .PreCreateCheck
	I0806 00:35:33.100485    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:33.100635    4292 main.go:141] libmachine: (multinode-100000) Calling .GetConfigRaw
	I0806 00:35:33.109837    4292 main.go:141] libmachine: Creating machine...
	I0806 00:35:33.109854    4292 main.go:141] libmachine: (multinode-100000) Calling .Create
	I0806 00:35:33.110025    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:33.110277    4292 main.go:141] libmachine: (multinode-100000) DBG | I0806 00:35:33.110022    4300 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 00:35:33.110418    4292 main.go:141] libmachine: (multinode-100000) Downloading /Users/jenkins/minikube-integration/19370-944/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-944/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 00:35:33.295827    4292 main.go:141] libmachine: (multinode-100000) DBG | I0806 00:35:33.295690    4300 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa...
	I0806 00:35:33.502634    4292 main.go:141] libmachine: (multinode-100000) DBG | I0806 00:35:33.502493    4300 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/multinode-100000.rawdisk...
	I0806 00:35:33.502655    4292 main.go:141] libmachine: (multinode-100000) DBG | Writing magic tar header
	I0806 00:35:33.502665    4292 main.go:141] libmachine: (multinode-100000) DBG | Writing SSH key tar header
	I0806 00:35:33.503537    4292 main.go:141] libmachine: (multinode-100000) DBG | I0806 00:35:33.503390    4300 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000 ...
	I0806 00:35:33.877390    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:33.877412    4292 main.go:141] libmachine: (multinode-100000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/hyperkit.pid
	I0806 00:35:33.877424    4292 main.go:141] libmachine: (multinode-100000) DBG | Using UUID 9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848
	I0806 00:35:33.988705    4292 main.go:141] libmachine: (multinode-100000) DBG | Generated MAC 1a:eb:5b:3:28:91
	I0806 00:35:33.988725    4292 main.go:141] libmachine: (multinode-100000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000
	I0806 00:35:33.988759    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000aa330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 00:35:33.988793    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000aa330)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 00:35:33.988839    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/multinode-100000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"}
	I0806 00:35:33.988870    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/multinode-100000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/console-ring -f kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"
	I0806 00:35:33.988893    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0806 00:35:33.991956    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 DEBUG: hyperkit: Pid is 4303
	I0806 00:35:33.992376    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 0
	I0806 00:35:33.992391    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:33.992446    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:33.993278    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:33.993360    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:33.993380    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:33.993405    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:33.993424    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:33.993437    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:33.993449    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:33.993464    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:33.993498    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:33.993520    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:33.993540    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:33.993552    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:33.993562    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:33.999245    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:33 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0806 00:35:34.053136    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0806 00:35:34.053714    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:35:34.053737    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:35:34.053746    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:35:34.053754    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:35:34.433368    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0806 00:35:34.433384    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0806 00:35:34.548018    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:35:34.548040    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:35:34.548066    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:35:34.548085    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:35:34.548944    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0806 00:35:34.548954    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0806 00:35:35.995149    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 1
	I0806 00:35:35.995163    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:35.995266    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:35.996054    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:35.996094    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:35.996108    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:35.996132    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:35.996169    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:35.996185    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:35.996200    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:35.996223    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:35.996236    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:35.996250    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:35.996258    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:35.996265    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:35.996272    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:37.997721    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 2
	I0806 00:35:37.997737    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:37.997833    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:37.998751    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:37.998796    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:37.998808    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:37.998817    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:37.998824    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:37.998834    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:37.998843    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:37.998850    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:37.998857    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:37.998872    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:37.998885    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:37.998906    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:37.998915    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:40.000050    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 3
	I0806 00:35:40.000064    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:40.000167    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:40.000922    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:40.000982    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:40.000992    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:40.001002    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:40.001009    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:40.001016    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:40.001021    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:40.001028    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:40.001034    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:40.001051    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:40.001065    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:40.001075    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:40.001092    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:40.125670    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:40 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0806 00:35:40.125726    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:40 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0806 00:35:40.125735    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:40 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0806 00:35:40.149566    4292 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:35:40 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0806 00:35:42.001968    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 4
	I0806 00:35:42.001983    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:42.002066    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:42.002835    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:42.002890    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0806 00:35:42.002900    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:35:42.002909    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:35:42.002917    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:35:42.002940    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:35:42.002948    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:35:42.002955    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:35:42.002964    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:35:42.002970    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:35:42.002978    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:35:42.002985    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:35:42.002996    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:35:44.004662    4292 main.go:141] libmachine: (multinode-100000) DBG | Attempt 5
	I0806 00:35:44.004678    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:44.004700    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:44.005526    4292 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:35:44.005569    4292 main.go:141] libmachine: (multinode-100000) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:35:44.005581    4292 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:35:44.005591    4292 main.go:141] libmachine: (multinode-100000) DBG | Found match: 1a:eb:5b:3:28:91
	I0806 00:35:44.005619    4292 main.go:141] libmachine: (multinode-100000) DBG | IP: 192.169.0.13
	I0806 00:35:44.005700    4292 main.go:141] libmachine: (multinode-100000) Calling .GetConfigRaw
	I0806 00:35:44.006323    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:44.006428    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:44.006524    4292 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0806 00:35:44.006537    4292 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:35:44.006634    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:35:44.006694    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:35:44.007476    4292 main.go:141] libmachine: Detecting operating system of created instance...
	I0806 00:35:44.007487    4292 main.go:141] libmachine: Waiting for SSH to be available...
	I0806 00:35:44.007493    4292 main.go:141] libmachine: Getting to WaitForSSH function...
	I0806 00:35:44.007498    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:44.007591    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:44.007674    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:44.007764    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:44.007853    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:44.007987    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:44.008184    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:44.008192    4292 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0806 00:35:45.076448    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:35:45.076465    4292 main.go:141] libmachine: Detecting the provisioner...
	I0806 00:35:45.076471    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.076624    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.076724    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.076819    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.076915    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.077045    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.077189    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.077197    4292 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0806 00:35:45.144548    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0806 00:35:45.144591    4292 main.go:141] libmachine: found compatible host: buildroot
	I0806 00:35:45.144598    4292 main.go:141] libmachine: Provisioning with buildroot...
	I0806 00:35:45.144603    4292 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:35:45.144740    4292 buildroot.go:166] provisioning hostname "multinode-100000"
	I0806 00:35:45.144749    4292 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:35:45.144843    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.144938    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.145034    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.145124    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.145213    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.145351    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.145492    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.145501    4292 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-100000 && echo "multinode-100000" | sudo tee /etc/hostname
	I0806 00:35:45.223228    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-100000
	
	I0806 00:35:45.223249    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.223379    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.223481    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.223570    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.223660    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.223790    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.223939    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.223951    4292 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-100000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-100000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-100000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 00:35:45.292034    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:35:45.292059    4292 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19370-944/.minikube CaCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19370-944/.minikube}
	I0806 00:35:45.292078    4292 buildroot.go:174] setting up certificates
	I0806 00:35:45.292089    4292 provision.go:84] configureAuth start
	I0806 00:35:45.292095    4292 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:35:45.292225    4292 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:35:45.292323    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.292419    4292 provision.go:143] copyHostCerts
	I0806 00:35:45.292449    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:35:45.292512    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem, removing ...
	I0806 00:35:45.292520    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:35:45.292668    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem (1078 bytes)
	I0806 00:35:45.292900    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:35:45.292931    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem, removing ...
	I0806 00:35:45.292935    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:35:45.293022    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem (1123 bytes)
	I0806 00:35:45.293179    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:35:45.293218    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem, removing ...
	I0806 00:35:45.293223    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:35:45.293307    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem (1679 bytes)
	I0806 00:35:45.293461    4292 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem org=jenkins.multinode-100000 san=[127.0.0.1 192.169.0.13 localhost minikube multinode-100000]
	I0806 00:35:45.520073    4292 provision.go:177] copyRemoteCerts
	I0806 00:35:45.520131    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 00:35:45.520149    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.520304    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.520400    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.520492    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.520588    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:35:45.562400    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0806 00:35:45.562481    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 00:35:45.581346    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0806 00:35:45.581402    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0806 00:35:45.600722    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0806 00:35:45.600779    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 00:35:45.620152    4292 provision.go:87] duration metric: took 328.044128ms to configureAuth
	I0806 00:35:45.620167    4292 buildroot.go:189] setting minikube options for container-runtime
	I0806 00:35:45.620308    4292 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:35:45.620324    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:45.620480    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.620572    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.620655    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.620746    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.620832    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.620951    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.621092    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.621099    4292 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0806 00:35:45.688009    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0806 00:35:45.688025    4292 buildroot.go:70] root file system type: tmpfs
	I0806 00:35:45.688103    4292 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0806 00:35:45.688116    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.688258    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.688371    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.688463    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.688579    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.688745    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.688882    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.688931    4292 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0806 00:35:45.766293    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0806 00:35:45.766319    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:45.766466    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:45.766564    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.766645    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:45.766724    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:45.766843    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:45.766987    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:45.766999    4292 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0806 00:35:47.341714    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0806 00:35:47.341733    4292 main.go:141] libmachine: Checking connection to Docker...
	I0806 00:35:47.341750    4292 main.go:141] libmachine: (multinode-100000) Calling .GetURL
	I0806 00:35:47.341889    4292 main.go:141] libmachine: Docker is up and running!
	I0806 00:35:47.341898    4292 main.go:141] libmachine: Reticulating splines...
	I0806 00:35:47.341902    4292 client.go:171] duration metric: took 14.241464585s to LocalClient.Create
	I0806 00:35:47.341919    4292 start.go:167] duration metric: took 14.241510649s to libmachine.API.Create "multinode-100000"
	I0806 00:35:47.341930    4292 start.go:293] postStartSetup for "multinode-100000" (driver="hyperkit")
	I0806 00:35:47.341937    4292 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 00:35:47.341947    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.342092    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 00:35:47.342105    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:47.342199    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:47.342285    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.342379    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:47.342467    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:35:47.382587    4292 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 00:35:47.385469    4292 command_runner.go:130] > NAME=Buildroot
	I0806 00:35:47.385477    4292 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0806 00:35:47.385481    4292 command_runner.go:130] > ID=buildroot
	I0806 00:35:47.385485    4292 command_runner.go:130] > VERSION_ID=2023.02.9
	I0806 00:35:47.385489    4292 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0806 00:35:47.385581    4292 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 00:35:47.385594    4292 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/addons for local assets ...
	I0806 00:35:47.385696    4292 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/files for local assets ...
	I0806 00:35:47.385887    4292 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> 14372.pem in /etc/ssl/certs
	I0806 00:35:47.385903    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> /etc/ssl/certs/14372.pem
	I0806 00:35:47.386118    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 00:35:47.394135    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem --> /etc/ssl/certs/14372.pem (1708 bytes)
	I0806 00:35:47.413151    4292 start.go:296] duration metric: took 71.212336ms for postStartSetup
	I0806 00:35:47.413177    4292 main.go:141] libmachine: (multinode-100000) Calling .GetConfigRaw
	I0806 00:35:47.413783    4292 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:35:47.413932    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:35:47.414265    4292 start.go:128] duration metric: took 14.346903661s to createHost
	I0806 00:35:47.414279    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:47.414369    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:47.414451    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.414534    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.414620    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:47.414723    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:35:47.414850    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:35:47.414859    4292 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 00:35:47.480376    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722929747.524109427
	
	I0806 00:35:47.480388    4292 fix.go:216] guest clock: 1722929747.524109427
	I0806 00:35:47.480393    4292 fix.go:229] Guest: 2024-08-06 00:35:47.524109427 -0700 PDT Remote: 2024-08-06 00:35:47.414273 -0700 PDT m=+14.774098631 (delta=109.836427ms)
	I0806 00:35:47.480413    4292 fix.go:200] guest clock delta is within tolerance: 109.836427ms
	I0806 00:35:47.480416    4292 start.go:83] releasing machines lock for "multinode-100000", held for 14.413201307s
	I0806 00:35:47.480435    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.480582    4292 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:35:47.480686    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.481025    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.481144    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:35:47.481220    4292 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 00:35:47.481250    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:47.481279    4292 ssh_runner.go:195] Run: cat /version.json
	I0806 00:35:47.481291    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:35:47.481352    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:47.481353    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:35:47.481449    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.481463    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:35:47.481541    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:47.481556    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:35:47.481638    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:35:47.481653    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:35:47.582613    4292 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0806 00:35:47.583428    4292 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0806 00:35:47.583596    4292 ssh_runner.go:195] Run: systemctl --version
	I0806 00:35:47.588843    4292 command_runner.go:130] > systemd 252 (252)
	I0806 00:35:47.588866    4292 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0806 00:35:47.588920    4292 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0806 00:35:47.593612    4292 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0806 00:35:47.593639    4292 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 00:35:47.593687    4292 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 00:35:47.607350    4292 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0806 00:35:47.607480    4292 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 00:35:47.607494    4292 start.go:495] detecting cgroup driver to use...
	I0806 00:35:47.607588    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:35:47.622260    4292 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0806 00:35:47.622586    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0806 00:35:47.631764    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0806 00:35:47.640650    4292 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0806 00:35:47.640704    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0806 00:35:47.649724    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:35:47.658558    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0806 00:35:47.667341    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:35:47.677183    4292 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 00:35:47.686281    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0806 00:35:47.695266    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0806 00:35:47.704014    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0806 00:35:47.712970    4292 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 00:35:47.720743    4292 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0806 00:35:47.720841    4292 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 00:35:47.728846    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:35:47.828742    4292 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0806 00:35:47.848191    4292 start.go:495] detecting cgroup driver to use...
	I0806 00:35:47.848271    4292 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0806 00:35:47.862066    4292 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0806 00:35:47.862604    4292 command_runner.go:130] > [Unit]
	I0806 00:35:47.862619    4292 command_runner.go:130] > Description=Docker Application Container Engine
	I0806 00:35:47.862625    4292 command_runner.go:130] > Documentation=https://docs.docker.com
	I0806 00:35:47.862630    4292 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0806 00:35:47.862634    4292 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0806 00:35:47.862642    4292 command_runner.go:130] > StartLimitBurst=3
	I0806 00:35:47.862646    4292 command_runner.go:130] > StartLimitIntervalSec=60
	I0806 00:35:47.862663    4292 command_runner.go:130] > [Service]
	I0806 00:35:47.862670    4292 command_runner.go:130] > Type=notify
	I0806 00:35:47.862674    4292 command_runner.go:130] > Restart=on-failure
	I0806 00:35:47.862696    4292 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0806 00:35:47.862704    4292 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0806 00:35:47.862710    4292 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0806 00:35:47.862716    4292 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0806 00:35:47.862724    4292 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0806 00:35:47.862731    4292 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0806 00:35:47.862742    4292 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0806 00:35:47.862756    4292 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0806 00:35:47.862768    4292 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0806 00:35:47.862789    4292 command_runner.go:130] > ExecStart=
	I0806 00:35:47.862803    4292 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0806 00:35:47.862808    4292 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0806 00:35:47.862814    4292 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0806 00:35:47.862820    4292 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0806 00:35:47.862826    4292 command_runner.go:130] > LimitNOFILE=infinity
	I0806 00:35:47.862831    4292 command_runner.go:130] > LimitNPROC=infinity
	I0806 00:35:47.862835    4292 command_runner.go:130] > LimitCORE=infinity
	I0806 00:35:47.862840    4292 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0806 00:35:47.862847    4292 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0806 00:35:47.862852    4292 command_runner.go:130] > TasksMax=infinity
	I0806 00:35:47.862857    4292 command_runner.go:130] > TimeoutStartSec=0
	I0806 00:35:47.862864    4292 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0806 00:35:47.862869    4292 command_runner.go:130] > Delegate=yes
	I0806 00:35:47.862875    4292 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0806 00:35:47.862880    4292 command_runner.go:130] > KillMode=process
	I0806 00:35:47.862885    4292 command_runner.go:130] > [Install]
	I0806 00:35:47.862897    4292 command_runner.go:130] > WantedBy=multi-user.target
	I0806 00:35:47.862957    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:35:47.874503    4292 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 00:35:47.888401    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:35:47.899678    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:35:47.910858    4292 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0806 00:35:47.935194    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:35:47.946319    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:35:47.961240    4292 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0806 00:35:47.961509    4292 ssh_runner.go:195] Run: which cri-dockerd
	I0806 00:35:47.964405    4292 command_runner.go:130] > /usr/bin/cri-dockerd
	I0806 00:35:47.964539    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0806 00:35:47.972571    4292 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0806 00:35:47.986114    4292 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0806 00:35:48.089808    4292 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0806 00:35:48.189821    4292 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0806 00:35:48.189902    4292 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0806 00:35:48.205371    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:35:48.305180    4292 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:35:50.610688    4292 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.305442855s)
	I0806 00:35:50.610744    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0806 00:35:50.621917    4292 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0806 00:37:45.085447    4292 ssh_runner.go:235] Completed: sudo systemctl stop cri-docker.socket: (1m54.461245771s)
	I0806 00:37:45.085519    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0806 00:37:45.097196    4292 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0806 00:37:45.197114    4292 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0806 00:37:45.292406    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:37:45.391129    4292 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0806 00:37:45.405046    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0806 00:37:45.416102    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:37:45.533604    4292 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0806 00:37:45.589610    4292 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0806 00:37:45.589706    4292 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0806 00:37:45.594037    4292 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0806 00:37:45.594049    4292 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0806 00:37:45.594054    4292 command_runner.go:130] > Device: 0,22	Inode: 805         Links: 1
	I0806 00:37:45.594060    4292 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0806 00:37:45.594064    4292 command_runner.go:130] > Access: 2024-08-06 07:37:45.625216614 +0000
	I0806 00:37:45.594069    4292 command_runner.go:130] > Modify: 2024-08-06 07:37:45.625216614 +0000
	I0806 00:37:45.594073    4292 command_runner.go:130] > Change: 2024-08-06 07:37:45.627215775 +0000
	I0806 00:37:45.594076    4292 command_runner.go:130] >  Birth: -
	I0806 00:37:45.594117    4292 start.go:563] Will wait 60s for crictl version
	I0806 00:37:45.594161    4292 ssh_runner.go:195] Run: which crictl
	I0806 00:37:45.596956    4292 command_runner.go:130] > /usr/bin/crictl
	I0806 00:37:45.597171    4292 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 00:37:45.621060    4292 command_runner.go:130] > Version:  0.1.0
	I0806 00:37:45.621116    4292 command_runner.go:130] > RuntimeName:  docker
	I0806 00:37:45.621195    4292 command_runner.go:130] > RuntimeVersion:  27.1.1
	I0806 00:37:45.621265    4292 command_runner.go:130] > RuntimeApiVersion:  v1
	I0806 00:37:45.622461    4292 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0806 00:37:45.622524    4292 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0806 00:37:45.639748    4292 command_runner.go:130] > 27.1.1
	I0806 00:37:45.640898    4292 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0806 00:37:45.659970    4292 command_runner.go:130] > 27.1.1
	I0806 00:37:45.682623    4292 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0806 00:37:45.682654    4292 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:37:45.682940    4292 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0806 00:37:45.686120    4292 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 00:37:45.696475    4292 kubeadm.go:883] updating cluster {Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 00:37:45.696537    4292 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:37:45.696591    4292 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0806 00:37:45.709358    4292 docker.go:685] Got preloaded images: 
	I0806 00:37:45.709371    4292 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0806 00:37:45.709415    4292 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0806 00:37:45.717614    4292 command_runner.go:139] > {"Repositories":{}}
	I0806 00:37:45.717741    4292 ssh_runner.go:195] Run: which lz4
	I0806 00:37:45.720684    4292 command_runner.go:130] > /usr/bin/lz4
	I0806 00:37:45.720774    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0806 00:37:45.720887    4292 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0806 00:37:45.723901    4292 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 00:37:45.723990    4292 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0806 00:37:45.724007    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
	I0806 00:37:46.617374    4292 docker.go:649] duration metric: took 896.51057ms to copy over tarball
	I0806 00:37:46.617438    4292 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0806 00:37:48.962709    4292 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.345209203s)
	I0806 00:37:48.962723    4292 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0806 00:37:48.989708    4292 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0806 00:37:48.998314    4292 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.3":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.3":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.3":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.3":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0806 00:37:48.998434    4292 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0806 00:37:49.011940    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:37:49.104996    4292 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:37:51.441428    4292 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.336367372s)
	I0806 00:37:51.441504    4292 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0806 00:37:51.454654    4292 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0806 00:37:51.454669    4292 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0806 00:37:51.454674    4292 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0806 00:37:51.454682    4292 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0806 00:37:51.454686    4292 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0806 00:37:51.454690    4292 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0806 00:37:51.454695    4292 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0806 00:37:51.454700    4292 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:37:51.455392    4292 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0806 00:37:51.455409    4292 cache_images.go:84] Images are preloaded, skipping loading
	I0806 00:37:51.455420    4292 kubeadm.go:934] updating node { 192.169.0.13 8443 v1.30.3 docker true true} ...
	I0806 00:37:51.455506    4292 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-100000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 00:37:51.455578    4292 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0806 00:37:51.493148    4292 command_runner.go:130] > cgroupfs
	I0806 00:37:51.493761    4292 cni.go:84] Creating CNI manager for ""
	I0806 00:37:51.493770    4292 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0806 00:37:51.493779    4292 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 00:37:51.493799    4292 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.13 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-100000 NodeName:multinode-100000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 00:37:51.493886    4292 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-100000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 00:37:51.493946    4292 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 00:37:51.501517    4292 command_runner.go:130] > kubeadm
	I0806 00:37:51.501524    4292 command_runner.go:130] > kubectl
	I0806 00:37:51.501527    4292 command_runner.go:130] > kubelet
	I0806 00:37:51.501670    4292 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 00:37:51.501712    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 00:37:51.509045    4292 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0806 00:37:51.522572    4292 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 00:37:51.535791    4292 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0806 00:37:51.549550    4292 ssh_runner.go:195] Run: grep 192.169.0.13	control-plane.minikube.internal$ /etc/hosts
	I0806 00:37:51.552639    4292 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 00:37:51.562209    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:37:51.657200    4292 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:37:51.669303    4292 certs.go:68] Setting up /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000 for IP: 192.169.0.13
	I0806 00:37:51.669315    4292 certs.go:194] generating shared ca certs ...
	I0806 00:37:51.669325    4292 certs.go:226] acquiring lock for ca certs: {Name:mk58145664d6c2b1eff70ba1600cc91cf1a11355 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.669518    4292 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.key
	I0806 00:37:51.669593    4292 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.key
	I0806 00:37:51.669606    4292 certs.go:256] generating profile certs ...
	I0806 00:37:51.669656    4292 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key
	I0806 00:37:51.669668    4292 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt with IP's: []
	I0806 00:37:51.792624    4292 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt ...
	I0806 00:37:51.792639    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt: {Name:mk8667fc194de8cf8fded4f6b0b716fe105f94fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.792981    4292 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key ...
	I0806 00:37:51.792989    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key: {Name:mk5693609b0c83eb3bce2eae7a5d8211445280d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.793215    4292 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key.de816dec
	I0806 00:37:51.793229    4292 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt.de816dec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.13]
	I0806 00:37:51.926808    4292 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt.de816dec ...
	I0806 00:37:51.926818    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt.de816dec: {Name:mk977e2f365dba4e3b0587a998566fa4d7926493 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.927069    4292 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key.de816dec ...
	I0806 00:37:51.927078    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key.de816dec: {Name:mkdef83341ea7ae5698bd9e2d60c39f8cd2a4e46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:51.927285    4292 certs.go:381] copying /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt.de816dec -> /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt
	I0806 00:37:51.927484    4292 certs.go:385] copying /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key.de816dec -> /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key
	I0806 00:37:51.927653    4292 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key
	I0806 00:37:51.927669    4292 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt with IP's: []
	I0806 00:37:52.088433    4292 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt ...
	I0806 00:37:52.088444    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt: {Name:mkc673b9a3bc6652ddb14f333f9d124c615a6826 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:52.088718    4292 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key ...
	I0806 00:37:52.088726    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key: {Name:mkf7f90929aa11855cc285630f5ad4bb575ccae4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:37:52.088945    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0806 00:37:52.088974    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0806 00:37:52.088995    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0806 00:37:52.089015    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0806 00:37:52.089034    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0806 00:37:52.089054    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0806 00:37:52.089072    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0806 00:37:52.089091    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0806 00:37:52.089188    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437.pem (1338 bytes)
	W0806 00:37:52.089246    4292 certs.go:480] ignoring /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437_empty.pem, impossibly tiny 0 bytes
	I0806 00:37:52.089257    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 00:37:52.089300    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem (1078 bytes)
	I0806 00:37:52.089366    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem (1123 bytes)
	I0806 00:37:52.089422    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem (1679 bytes)
	I0806 00:37:52.089542    4292 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem (1708 bytes)
	I0806 00:37:52.089590    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.089613    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.089632    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437.pem -> /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.090046    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 00:37:52.111710    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 00:37:52.131907    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 00:37:52.151479    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0806 00:37:52.171693    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0806 00:37:52.191484    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 00:37:52.211176    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 00:37:52.230802    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 00:37:52.250506    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem --> /usr/share/ca-certificates/14372.pem (1708 bytes)
	I0806 00:37:52.270606    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 00:37:52.290275    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437.pem --> /usr/share/ca-certificates/1437.pem (1338 bytes)
	I0806 00:37:52.309237    4292 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 00:37:52.323119    4292 ssh_runner.go:195] Run: openssl version
	I0806 00:37:52.327113    4292 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0806 00:37:52.327315    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14372.pem && ln -fs /usr/share/ca-certificates/14372.pem /etc/ssl/certs/14372.pem"
	I0806 00:37:52.335532    4292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.338816    4292 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  6 07:14 /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.338844    4292 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:14 /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.338901    4292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14372.pem
	I0806 00:37:52.343016    4292 command_runner.go:130] > 3ec20f2e
	I0806 00:37:52.343165    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14372.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 00:37:52.351433    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 00:37:52.362210    4292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.368669    4292 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  6 07:05 /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.368937    4292 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:05 /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.368987    4292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:37:52.373469    4292 command_runner.go:130] > b5213941
	I0806 00:37:52.373704    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 00:37:52.384235    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1437.pem && ln -fs /usr/share/ca-certificates/1437.pem /etc/ssl/certs/1437.pem"
	I0806 00:37:52.395305    4292 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.400212    4292 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  6 07:14 /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.400421    4292 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:14 /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.400474    4292 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1437.pem
	I0806 00:37:52.406136    4292 command_runner.go:130] > 51391683
	I0806 00:37:52.406235    4292 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1437.pem /etc/ssl/certs/51391683.0"
	I0806 00:37:52.415464    4292 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 00:37:52.418597    4292 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0806 00:37:52.418637    4292 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0806 00:37:52.418680    4292 kubeadm.go:392] StartCluster: {Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:37:52.418767    4292 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0806 00:37:52.431331    4292 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 00:37:52.439651    4292 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0806 00:37:52.439663    4292 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0806 00:37:52.439684    4292 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0806 00:37:52.439814    4292 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 00:37:52.447838    4292 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 00:37:52.455844    4292 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0806 00:37:52.455854    4292 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0806 00:37:52.455860    4292 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0806 00:37:52.455865    4292 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 00:37:52.455878    4292 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 00:37:52.455884    4292 kubeadm.go:157] found existing configuration files:
	
	I0806 00:37:52.455917    4292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 00:37:52.463564    4292 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 00:37:52.463581    4292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 00:37:52.463638    4292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 00:37:52.471500    4292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 00:37:52.479060    4292 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 00:37:52.479083    4292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 00:37:52.479115    4292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 00:37:52.487038    4292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 00:37:52.494658    4292 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 00:37:52.494678    4292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 00:37:52.494715    4292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 00:37:52.502699    4292 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 00:37:52.510396    4292 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 00:37:52.510413    4292 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 00:37:52.510448    4292 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 00:37:52.518459    4292 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0806 00:37:52.582551    4292 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0806 00:37:52.582567    4292 command_runner.go:130] > [init] Using Kubernetes version: v1.30.3
	I0806 00:37:52.582622    4292 kubeadm.go:310] [preflight] Running pre-flight checks
	I0806 00:37:52.582630    4292 command_runner.go:130] > [preflight] Running pre-flight checks
	I0806 00:37:52.670948    4292 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 00:37:52.670966    4292 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0806 00:37:52.671056    4292 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 00:37:52.671068    4292 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0806 00:37:52.671166    4292 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 00:37:52.671175    4292 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0806 00:37:52.840152    4292 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 00:37:52.840173    4292 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 00:37:52.860448    4292 out.go:204]   - Generating certificates and keys ...
	I0806 00:37:52.860515    4292 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0806 00:37:52.860522    4292 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0806 00:37:52.860574    4292 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0806 00:37:52.860578    4292 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0806 00:37:53.262704    4292 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0806 00:37:53.262716    4292 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0806 00:37:53.357977    4292 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0806 00:37:53.357990    4292 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0806 00:37:53.460380    4292 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0806 00:37:53.460383    4292 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0806 00:37:53.557795    4292 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0806 00:37:53.557804    4292 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0806 00:37:53.672961    4292 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0806 00:37:53.672972    4292 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0806 00:37:53.673143    4292 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-100000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0806 00:37:53.673153    4292 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-100000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0806 00:37:53.823821    4292 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0806 00:37:53.823828    4292 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0806 00:37:53.823935    4292 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-100000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0806 00:37:53.823943    4292 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-100000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0806 00:37:53.907043    4292 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0806 00:37:53.907053    4292 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0806 00:37:54.170203    4292 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0806 00:37:54.170215    4292 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0806 00:37:54.232963    4292 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0806 00:37:54.232976    4292 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0806 00:37:54.233108    4292 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 00:37:54.233115    4292 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 00:37:54.560300    4292 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 00:37:54.560310    4292 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 00:37:54.689503    4292 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0806 00:37:54.689520    4292 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0806 00:37:54.772704    4292 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 00:37:54.772714    4292 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 00:37:54.901757    4292 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 00:37:54.901770    4292 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 00:37:55.057967    4292 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 00:37:55.057987    4292 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 00:37:55.058372    4292 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 00:37:55.058381    4292 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 00:37:55.060093    4292 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 00:37:55.060100    4292 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 00:37:55.081494    4292 out.go:204]   - Booting up control plane ...
	I0806 00:37:55.081559    4292 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 00:37:55.081566    4292 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 00:37:55.081622    4292 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 00:37:55.081627    4292 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 00:37:55.081688    4292 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 00:37:55.081706    4292 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 00:37:55.081835    4292 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 00:37:55.081836    4292 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 00:37:55.081921    4292 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 00:37:55.081928    4292 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 00:37:55.081962    4292 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0806 00:37:55.081972    4292 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0806 00:37:55.190382    4292 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0806 00:37:55.190382    4292 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0806 00:37:55.190467    4292 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0806 00:37:55.190474    4292 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0806 00:37:55.692270    4292 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.007026ms
	I0806 00:37:55.692288    4292 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 502.007026ms
	I0806 00:37:55.692374    4292 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0806 00:37:55.692383    4292 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0806 00:37:59.693684    4292 kubeadm.go:310] [api-check] The API server is healthy after 4.003026548s
	I0806 00:37:59.693693    4292 command_runner.go:130] > [api-check] The API server is healthy after 4.003026548s
	I0806 00:37:59.705633    4292 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0806 00:37:59.705646    4292 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0806 00:37:59.720099    4292 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0806 00:37:59.720109    4292 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0806 00:37:59.738249    4292 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0806 00:37:59.738275    4292 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0806 00:37:59.738423    4292 kubeadm.go:310] [mark-control-plane] Marking the node multinode-100000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0806 00:37:59.738434    4292 command_runner.go:130] > [mark-control-plane] Marking the node multinode-100000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0806 00:37:59.745383    4292 kubeadm.go:310] [bootstrap-token] Using token: vbomjh.qsf72loo4zgv06fc
	I0806 00:37:59.745397    4292 command_runner.go:130] > [bootstrap-token] Using token: vbomjh.qsf72loo4zgv06fc
	I0806 00:37:59.783358    4292 out.go:204]   - Configuring RBAC rules ...
	I0806 00:37:59.783539    4292 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0806 00:37:59.783560    4292 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0806 00:37:59.785907    4292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0806 00:37:59.785948    4292 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0806 00:37:59.826999    4292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0806 00:37:59.827006    4292 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0806 00:37:59.829623    4292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0806 00:37:59.829627    4292 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0806 00:37:59.832217    4292 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0806 00:37:59.832231    4292 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0806 00:37:59.834614    4292 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0806 00:37:59.834628    4292 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0806 00:38:00.099434    4292 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0806 00:38:00.099444    4292 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0806 00:38:00.510267    4292 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0806 00:38:00.510286    4292 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0806 00:38:01.098516    4292 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0806 00:38:01.098535    4292 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0806 00:38:01.099426    4292 kubeadm.go:310] 
	I0806 00:38:01.099476    4292 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0806 00:38:01.099482    4292 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0806 00:38:01.099485    4292 kubeadm.go:310] 
	I0806 00:38:01.099571    4292 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0806 00:38:01.099579    4292 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0806 00:38:01.099583    4292 kubeadm.go:310] 
	I0806 00:38:01.099621    4292 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0806 00:38:01.099627    4292 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0806 00:38:01.099685    4292 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0806 00:38:01.099692    4292 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0806 00:38:01.099737    4292 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0806 00:38:01.099742    4292 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0806 00:38:01.099758    4292 kubeadm.go:310] 
	I0806 00:38:01.099805    4292 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0806 00:38:01.099811    4292 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0806 00:38:01.099816    4292 kubeadm.go:310] 
	I0806 00:38:01.099868    4292 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0806 00:38:01.099874    4292 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0806 00:38:01.099878    4292 kubeadm.go:310] 
	I0806 00:38:01.099924    4292 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0806 00:38:01.099932    4292 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0806 00:38:01.099998    4292 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0806 00:38:01.100012    4292 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0806 00:38:01.100083    4292 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0806 00:38:01.100088    4292 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0806 00:38:01.100092    4292 kubeadm.go:310] 
	I0806 00:38:01.100168    4292 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0806 00:38:01.100177    4292 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0806 00:38:01.100245    4292 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0806 00:38:01.100249    4292 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0806 00:38:01.100256    4292 kubeadm.go:310] 
	I0806 00:38:01.100330    4292 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token vbomjh.qsf72loo4zgv06fc \
	I0806 00:38:01.100335    4292 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vbomjh.qsf72loo4zgv06fc \
	I0806 00:38:01.100422    4292 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e \
	I0806 00:38:01.100428    4292 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e \
	I0806 00:38:01.100450    4292 command_runner.go:130] > 	--control-plane 
	I0806 00:38:01.100454    4292 kubeadm.go:310] 	--control-plane 
	I0806 00:38:01.100465    4292 kubeadm.go:310] 
	I0806 00:38:01.100533    4292 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0806 00:38:01.100538    4292 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0806 00:38:01.100545    4292 kubeadm.go:310] 
	I0806 00:38:01.100605    4292 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token vbomjh.qsf72loo4zgv06fc \
	I0806 00:38:01.100610    4292 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vbomjh.qsf72loo4zgv06fc \
	I0806 00:38:01.100694    4292 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e 
	I0806 00:38:01.100703    4292 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e 
	I0806 00:38:01.101330    4292 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 00:38:01.101334    4292 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 00:38:01.101354    4292 cni.go:84] Creating CNI manager for ""
	I0806 00:38:01.101361    4292 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0806 00:38:01.123627    4292 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0806 00:38:01.196528    4292 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0806 00:38:01.201237    4292 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0806 00:38:01.201250    4292 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0806 00:38:01.201255    4292 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0806 00:38:01.201260    4292 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0806 00:38:01.201265    4292 command_runner.go:130] > Access: 2024-08-06 07:35:44.089192446 +0000
	I0806 00:38:01.201275    4292 command_runner.go:130] > Modify: 2024-07-29 16:10:03.000000000 +0000
	I0806 00:38:01.201282    4292 command_runner.go:130] > Change: 2024-08-06 07:35:42.019366338 +0000
	I0806 00:38:01.201285    4292 command_runner.go:130] >  Birth: -
	I0806 00:38:01.201457    4292 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0806 00:38:01.201465    4292 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0806 00:38:01.217771    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0806 00:38:01.451925    4292 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0806 00:38:01.451939    4292 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0806 00:38:01.451946    4292 command_runner.go:130] > serviceaccount/kindnet created
	I0806 00:38:01.451949    4292 command_runner.go:130] > daemonset.apps/kindnet created
	I0806 00:38:01.451970    4292 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 00:38:01.452056    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:01.452057    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-100000 minikube.k8s.io/updated_at=2024_08_06T00_38_01_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8 minikube.k8s.io/name=multinode-100000 minikube.k8s.io/primary=true
	I0806 00:38:01.610233    4292 command_runner.go:130] > node/multinode-100000 labeled
	I0806 00:38:01.611382    4292 command_runner.go:130] > -16
	I0806 00:38:01.611408    4292 ops.go:34] apiserver oom_adj: -16
	I0806 00:38:01.611436    4292 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0806 00:38:01.611535    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:01.673352    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:02.112700    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:02.170574    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:02.612824    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:02.681015    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:03.112860    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:03.173114    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:03.612060    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:03.674241    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:04.112239    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:04.174075    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:04.613016    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:04.675523    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:05.112239    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:05.171613    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:05.611863    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:05.672963    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:06.112009    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:06.167728    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:06.613273    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:06.670554    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:07.113057    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:07.167700    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:07.613035    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:07.675035    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:08.113568    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:08.177386    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:08.611850    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:08.669063    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:09.113472    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:09.173560    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:09.613780    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:09.676070    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:10.112109    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:10.172674    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:10.613930    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:10.669788    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:11.112032    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:11.178288    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:11.612564    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:11.681621    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:12.112219    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:12.169314    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:12.612581    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:12.670247    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:13.113181    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:13.172574    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:13.613362    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:13.672811    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:14.112553    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:14.177904    4292 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0806 00:38:14.612414    4292 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 00:38:14.708737    4292 command_runner.go:130] > NAME      SECRETS   AGE
	I0806 00:38:14.708751    4292 command_runner.go:130] > default   0         0s
	I0806 00:38:14.710041    4292 kubeadm.go:1113] duration metric: took 13.257790627s to wait for elevateKubeSystemPrivileges
	I0806 00:38:14.710058    4292 kubeadm.go:394] duration metric: took 22.29094538s to StartCluster
	I0806 00:38:14.710072    4292 settings.go:142] acquiring lock: {Name:mk7aec99dc6d69d6a2c18b35ff8bde3cddf78620 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:38:14.710182    4292 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:38:14.710733    4292 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/kubeconfig: {Name:mka547673b59bc4eb06e1f2c8130de31708dba29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:38:14.710987    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0806 00:38:14.710992    4292 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 00:38:14.711032    4292 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 00:38:14.711084    4292 addons.go:69] Setting storage-provisioner=true in profile "multinode-100000"
	I0806 00:38:14.711092    4292 addons.go:69] Setting default-storageclass=true in profile "multinode-100000"
	I0806 00:38:14.711119    4292 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-100000"
	I0806 00:38:14.711121    4292 addons.go:234] Setting addon storage-provisioner=true in "multinode-100000"
	I0806 00:38:14.711168    4292 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:38:14.711168    4292 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:38:14.711516    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.711537    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.711593    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.711618    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.720676    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52433
	I0806 00:38:14.721047    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52435
	I0806 00:38:14.721245    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.721337    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.721602    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.721612    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.721697    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.721714    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.721841    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.721914    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.721953    4292 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:38:14.722073    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:14.722146    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:38:14.722387    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.722420    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.724119    4292 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:38:14.724644    4292 kapi.go:59] client config for multinode-100000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key", CAFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x126711a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 00:38:14.725326    4292 cert_rotation.go:137] Starting client certificate rotation controller
	I0806 00:38:14.725514    4292 addons.go:234] Setting addon default-storageclass=true in "multinode-100000"
	I0806 00:38:14.725534    4292 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:38:14.725758    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.725781    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.731505    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52437
	I0806 00:38:14.731883    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.732214    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.732225    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.732427    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.732542    4292 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:38:14.732646    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:14.732716    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:38:14.733688    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:38:14.734469    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52439
	I0806 00:38:14.749366    4292 out.go:177] * Verifying Kubernetes components...
	I0806 00:38:14.750086    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.771676    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.771692    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.771908    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.772346    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:14.772371    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:14.781133    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52441
	I0806 00:38:14.781487    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:14.781841    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:14.781857    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:14.782071    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:14.782186    4292 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:38:14.782264    4292 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:14.782343    4292 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:38:14.783274    4292 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:38:14.783391    4292 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 00:38:14.783400    4292 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 00:38:14.783408    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:38:14.783487    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:38:14.783566    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:38:14.783647    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:38:14.783724    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:38:14.807507    4292 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:38:14.814402    4292 command_runner.go:130] > apiVersion: v1
	I0806 00:38:14.814414    4292 command_runner.go:130] > data:
	I0806 00:38:14.814417    4292 command_runner.go:130] >   Corefile: |
	I0806 00:38:14.814421    4292 command_runner.go:130] >     .:53 {
	I0806 00:38:14.814427    4292 command_runner.go:130] >         errors
	I0806 00:38:14.814434    4292 command_runner.go:130] >         health {
	I0806 00:38:14.814462    4292 command_runner.go:130] >            lameduck 5s
	I0806 00:38:14.814467    4292 command_runner.go:130] >         }
	I0806 00:38:14.814470    4292 command_runner.go:130] >         ready
	I0806 00:38:14.814475    4292 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0806 00:38:14.814479    4292 command_runner.go:130] >            pods insecure
	I0806 00:38:14.814483    4292 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0806 00:38:14.814491    4292 command_runner.go:130] >            ttl 30
	I0806 00:38:14.814494    4292 command_runner.go:130] >         }
	I0806 00:38:14.814498    4292 command_runner.go:130] >         prometheus :9153
	I0806 00:38:14.814502    4292 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0806 00:38:14.814511    4292 command_runner.go:130] >            max_concurrent 1000
	I0806 00:38:14.814515    4292 command_runner.go:130] >         }
	I0806 00:38:14.814519    4292 command_runner.go:130] >         cache 30
	I0806 00:38:14.814522    4292 command_runner.go:130] >         loop
	I0806 00:38:14.814527    4292 command_runner.go:130] >         reload
	I0806 00:38:14.814530    4292 command_runner.go:130] >         loadbalance
	I0806 00:38:14.814541    4292 command_runner.go:130] >     }
	I0806 00:38:14.814545    4292 command_runner.go:130] > kind: ConfigMap
	I0806 00:38:14.814548    4292 command_runner.go:130] > metadata:
	I0806 00:38:14.814555    4292 command_runner.go:130] >   creationTimestamp: "2024-08-06T07:38:00Z"
	I0806 00:38:14.814559    4292 command_runner.go:130] >   name: coredns
	I0806 00:38:14.814563    4292 command_runner.go:130] >   namespace: kube-system
	I0806 00:38:14.814566    4292 command_runner.go:130] >   resourceVersion: "257"
	I0806 00:38:14.814570    4292 command_runner.go:130] >   uid: d8fd854e-ee58-4cd2-8723-2418b89b5dc3
	I0806 00:38:14.814679    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0806 00:38:14.866135    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:38:14.866436    4292 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 00:38:14.866454    4292 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 00:38:14.866500    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:38:14.866990    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:38:14.867164    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:38:14.867290    4292 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:38:14.867406    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:38:14.872742    4292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 00:38:15.241341    4292 command_runner.go:130] > configmap/coredns replaced
	I0806 00:38:15.242685    4292 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I0806 00:38:15.242758    4292 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:38:15.242961    4292 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:38:15.243148    4292 kapi.go:59] client config for multinode-100000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key", CAFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x126711a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 00:38:15.243392    4292 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0806 00:38:15.243400    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.243407    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.243411    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.256678    4292 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0806 00:38:15.256695    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.256702    4292 round_trippers.go:580]     Audit-Id: c7c6b1c0-d638-405d-9826-1613f9442124
	I0806 00:38:15.256715    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.256719    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.256721    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.256724    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.256731    4292 round_trippers.go:580]     Content-Length: 291
	I0806 00:38:15.256734    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.256762    4292 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a7f2b260-b404-47f8-94a7-9444b4d2e65d","resourceVersion":"385","creationTimestamp":"2024-08-06T07:38:00Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0806 00:38:15.257109    4292 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a7f2b260-b404-47f8-94a7-9444b4d2e65d","resourceVersion":"385","creationTimestamp":"2024-08-06T07:38:00Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0806 00:38:15.257149    4292 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0806 00:38:15.257157    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.257163    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.257166    4292 round_trippers.go:473]     Content-Type: application/json
	I0806 00:38:15.257169    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.263818    4292 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0806 00:38:15.263831    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.263837    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.263840    4292 round_trippers.go:580]     Content-Length: 291
	I0806 00:38:15.263843    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.263846    4292 round_trippers.go:580]     Audit-Id: fc5baf31-13f0-4c94-a234-c9583698bc4a
	I0806 00:38:15.263849    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.263853    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.263856    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.263869    4292 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a7f2b260-b404-47f8-94a7-9444b4d2e65d","resourceVersion":"387","creationTimestamp":"2024-08-06T07:38:00Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0806 00:38:15.288440    4292 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 00:38:15.316986    4292 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0806 00:38:15.318339    4292 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:38:15.318523    4292 kapi.go:59] client config for multinode-100000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key", CAFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x126711a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 00:38:15.318703    4292 node_ready.go:35] waiting up to 6m0s for node "multinode-100000" to be "Ready" ...
	I0806 00:38:15.318752    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:15.318757    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.318762    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.318766    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.318890    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.318897    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.319084    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.319089    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.319096    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.319104    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.319113    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.319239    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.319249    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.319298    4292 round_trippers.go:463] GET https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses
	I0806 00:38:15.319296    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.319304    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.319313    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.319316    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.328466    4292 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0806 00:38:15.328478    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.328484    4292 round_trippers.go:580]     Content-Length: 1273
	I0806 00:38:15.328487    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.328490    4292 round_trippers.go:580]     Audit-Id: 55117bdb-b1b1-4b1d-a091-1eb3d07a9569
	I0806 00:38:15.328493    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.328496    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.328498    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.328501    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.328521    4292 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"396"},"items":[{"metadata":{"name":"standard","uid":"db2316a9-24ea-47df-bf39-03322fc9a8eb","resourceVersion":"396","creationTimestamp":"2024-08-06T07:38:15Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-06T07:38:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0806 00:38:15.328567    4292 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0806 00:38:15.328581    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.328586    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.328590    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.328593    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.328596    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.328599    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.328602    4292 round_trippers.go:580]     Audit-Id: 7ce70ed0-47c9-432d-8e5b-ac52e38e59a7
	I0806 00:38:15.328766    4292 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"db2316a9-24ea-47df-bf39-03322fc9a8eb","resourceVersion":"396","creationTimestamp":"2024-08-06T07:38:15Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-06T07:38:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0806 00:38:15.328802    4292 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0806 00:38:15.328808    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.328813    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.328818    4292 round_trippers.go:473]     Content-Type: application/json
	I0806 00:38:15.328820    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.330337    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:15.340216    4292 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0806 00:38:15.340231    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.340236    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.340243    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.340247    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.340251    4292 round_trippers.go:580]     Content-Length: 1220
	I0806 00:38:15.340254    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.340257    4292 round_trippers.go:580]     Audit-Id: 6dc8b90a-612f-4331-8c4e-911fcb5e8b97
	I0806 00:38:15.340261    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.340479    4292 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"db2316a9-24ea-47df-bf39-03322fc9a8eb","resourceVersion":"396","creationTimestamp":"2024-08-06T07:38:15Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-06T07:38:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0806 00:38:15.340564    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.340574    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.340728    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.340739    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.340746    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.606405    4292 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0806 00:38:15.610350    4292 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0806 00:38:15.615396    4292 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0806 00:38:15.619891    4292 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0806 00:38:15.627349    4292 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0806 00:38:15.635206    4292 command_runner.go:130] > pod/storage-provisioner created
	I0806 00:38:15.636675    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.636686    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.636830    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.636833    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.636843    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.636852    4292 main.go:141] libmachine: Making call to close driver server
	I0806 00:38:15.636857    4292 main.go:141] libmachine: (multinode-100000) Calling .Close
	I0806 00:38:15.636972    4292 main.go:141] libmachine: Successfully made call to close driver server
	I0806 00:38:15.636980    4292 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 00:38:15.636995    4292 main.go:141] libmachine: (multinode-100000) DBG | Closing plugin on server side
	I0806 00:38:15.660876    4292 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0806 00:38:15.681735    4292 addons.go:510] duration metric: took 970.696783ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0806 00:38:15.744023    4292 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0806 00:38:15.744043    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.744049    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.744053    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.745471    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:15.745481    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.745486    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.745489    4292 round_trippers.go:580]     Audit-Id: 2e02dd3c-4368-4363-aef8-c54cb00d4e41
	I0806 00:38:15.745492    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.745495    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.745497    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.745500    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.745503    4292 round_trippers.go:580]     Content-Length: 291
	I0806 00:38:15.745519    4292 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"a7f2b260-b404-47f8-94a7-9444b4d2e65d","resourceVersion":"399","creationTimestamp":"2024-08-06T07:38:00Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0806 00:38:15.745572    4292 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-100000" context rescaled to 1 replicas
	I0806 00:38:15.820125    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:15.820137    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:15.820143    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:15.820145    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:15.821478    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:15.821488    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:15.821495    4292 round_trippers.go:580]     Audit-Id: 2538e82b-a5b8-4cce-b67d-49b0a0cc6ccb
	I0806 00:38:15.821499    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:15.821504    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:15.821509    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:15.821513    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:15.821517    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:15 GMT
	I0806 00:38:15.821699    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:16.318995    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:16.319022    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:16.319044    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:16.319050    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:16.321451    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:16.321466    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:16.321473    4292 round_trippers.go:580]     Audit-Id: 6d358883-b606-4bf9-b02f-6cb3dcc86ebb
	I0806 00:38:16.321478    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:16.321482    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:16.321507    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:16.321515    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:16.321519    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:16 GMT
	I0806 00:38:16.321636    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:16.819864    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:16.819880    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:16.819887    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:16.819892    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:16.822003    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:16.822013    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:16.822019    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:16.822032    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:16 GMT
	I0806 00:38:16.822039    4292 round_trippers.go:580]     Audit-Id: 688c294c-2ec1-4257-9ae2-31048566e1a5
	I0806 00:38:16.822042    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:16.822045    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:16.822048    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:16.822127    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:17.319875    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:17.319887    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:17.319893    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:17.319898    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:17.324202    4292 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 00:38:17.324219    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:17.324228    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:17.324233    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:17.324237    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:17.324247    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:17.324251    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:17 GMT
	I0806 00:38:17.324253    4292 round_trippers.go:580]     Audit-Id: 3cbcad32-1d66-4480-8eea-e0ba3baeb718
	I0806 00:38:17.324408    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:17.324668    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:17.818929    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:17.818941    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:17.818948    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:17.818952    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:17.820372    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:17.820383    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:17.820390    4292 round_trippers.go:580]     Audit-Id: 1b64d2ad-91d1-49c6-8964-cd044f7ab24f
	I0806 00:38:17.820395    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:17.820400    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:17.820404    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:17.820407    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:17.820409    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:17 GMT
	I0806 00:38:17.820562    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:18.318915    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:18.318928    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:18.318934    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:18.318937    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:18.320383    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:18.320392    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:18.320396    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:18.320400    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:18 GMT
	I0806 00:38:18.320403    4292 round_trippers.go:580]     Audit-Id: b404a6ee-15b9-4e15-b8f8-4cd9324a513d
	I0806 00:38:18.320405    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:18.320408    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:18.320411    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:18.320536    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:18.819634    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:18.819647    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:18.819654    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:18.819657    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:18.821628    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:18.821635    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:18.821639    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:18 GMT
	I0806 00:38:18.821643    4292 round_trippers.go:580]     Audit-Id: 12545d9e-2520-4675-8957-dd291bc1d252
	I0806 00:38:18.821646    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:18.821649    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:18.821651    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:18.821654    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:18.821749    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:19.319242    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:19.319258    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:19.319264    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:19.319267    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:19.320611    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:19.320621    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:19.320627    4292 round_trippers.go:580]     Audit-Id: a9b124b2-ff49-4d7d-961a-c4a1b6b3e4ab
	I0806 00:38:19.320630    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:19.320632    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:19.320635    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:19.320639    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:19.320642    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:19 GMT
	I0806 00:38:19.320781    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:19.820342    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:19.820371    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:19.820428    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:19.820437    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:19.823221    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:19.823242    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:19.823252    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:19.823258    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:19.823266    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:19.823272    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:19 GMT
	I0806 00:38:19.823291    4292 round_trippers.go:580]     Audit-Id: 9330a785-b406-42d7-a74c-e80b34311e1a
	I0806 00:38:19.823302    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:19.823409    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:19.823671    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:20.319027    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:20.319043    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:20.319051    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:20.319056    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:20.320812    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:20.320821    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:20.320827    4292 round_trippers.go:580]     Audit-Id: 1d9840bb-ba8b-45f8-852f-8ef7f645c8bd
	I0806 00:38:20.320830    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:20.320832    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:20.320835    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:20.320838    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:20.320841    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:20 GMT
	I0806 00:38:20.321034    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:20.819543    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:20.819566    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:20.819578    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:20.819585    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:20.822277    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:20.822293    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:20.822300    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:20.822303    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:20.822307    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:20 GMT
	I0806 00:38:20.822310    4292 round_trippers.go:580]     Audit-Id: 6a96712c-fdd2-4036-95c0-27109366b2b5
	I0806 00:38:20.822313    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:20.822332    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:20.822436    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:21.319938    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:21.320061    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:21.320076    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:21.320084    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:21.322332    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:21.322343    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:21.322350    4292 round_trippers.go:580]     Audit-Id: b6796df6-8c9c-475a-b9c2-e73edb1c0720
	I0806 00:38:21.322355    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:21.322359    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:21.322362    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:21.322366    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:21.322370    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:21 GMT
	I0806 00:38:21.322503    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:21.819349    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:21.819372    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:21.819383    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:21.819388    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:21.821890    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:21.821905    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:21.821912    4292 round_trippers.go:580]     Audit-Id: 89b2a861-f5a0-43e4-9d3f-01f7230eecc8
	I0806 00:38:21.821916    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:21.821920    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:21.821923    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:21.821927    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:21.821931    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:21 GMT
	I0806 00:38:21.822004    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:22.320544    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:22.320565    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:22.320576    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:22.320581    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:22.322858    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:22.322872    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:22.322879    4292 round_trippers.go:580]     Audit-Id: 70ae59be-bf9a-4c7a-9fb8-93ea504768fb
	I0806 00:38:22.322885    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:22.322888    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:22.322891    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:22.322895    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:22.322897    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:22 GMT
	I0806 00:38:22.323158    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:22.323412    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:22.819095    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:22.819114    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:22.819126    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:22.819132    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:22.821284    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:22.821297    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:22.821307    4292 round_trippers.go:580]     Audit-Id: 1c5d80ab-21c3-4733-bd98-f4c681e0fe0e
	I0806 00:38:22.821313    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:22.821318    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:22.821321    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:22.821324    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:22.821334    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:22 GMT
	I0806 00:38:22.821552    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:23.319478    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:23.319500    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:23.319518    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:23.319524    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:23.322104    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:23.322124    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:23.322132    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:23.322137    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:23.322143    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:23.322146    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:23.322156    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:23 GMT
	I0806 00:38:23.322161    4292 round_trippers.go:580]     Audit-Id: 5276d3f7-64a0-4983-b60c-4943cbdfd74f
	I0806 00:38:23.322305    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:23.819102    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:23.819121    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:23.819130    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:23.819135    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:23.821174    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:23.821208    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:23.821216    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:23.821222    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:23.821227    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:23.821230    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:23.821241    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:23 GMT
	I0806 00:38:23.821254    4292 round_trippers.go:580]     Audit-Id: 9a86a309-2e1e-4b43-9975-baf4a0c93f44
	I0806 00:38:23.821483    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:24.320265    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:24.320287    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:24.320299    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:24.320305    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:24.323064    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:24.323097    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:24.323123    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:24.323140    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:24.323149    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:24.323178    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:24.323185    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:24 GMT
	I0806 00:38:24.323196    4292 round_trippers.go:580]     Audit-Id: b0ef4ff1-b4d6-4fd5-870c-46b9f544b517
	I0806 00:38:24.323426    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:24.323675    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:24.819060    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:24.819080    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:24.819097    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:24.819136    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:24.821377    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:24.821390    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:24.821397    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:24 GMT
	I0806 00:38:24.821402    4292 round_trippers.go:580]     Audit-Id: b050183e-0245-4d40-9972-e2dd2be24181
	I0806 00:38:24.821405    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:24.821409    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:24.821413    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:24.821418    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:24.821619    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:25.319086    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:25.319102    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:25.319110    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:25.319114    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:25.321127    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:25.321149    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:25.321154    4292 round_trippers.go:580]     Audit-Id: b27c2996-2cfb-4ec2-83b6-49df62cf6805
	I0806 00:38:25.321177    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:25.321180    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:25.321184    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:25.321186    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:25.321194    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:25 GMT
	I0806 00:38:25.321259    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:25.820656    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:25.820678    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:25.820689    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:25.820695    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:25.823182    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:25.823194    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:25.823205    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:25.823210    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:25.823213    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:25.823216    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:25.823219    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:25 GMT
	I0806 00:38:25.823222    4292 round_trippers.go:580]     Audit-Id: e11f3fd5-b1c3-44c0-931c-e7172ae35765
	I0806 00:38:25.823311    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:26.320693    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:26.320710    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:26.320717    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:26.320721    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:26.322330    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:26.322339    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:26.322344    4292 round_trippers.go:580]     Audit-Id: 0c372b78-f3b7-43f2-a7aa-6ec405f17ce3
	I0806 00:38:26.322347    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:26.322350    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:26.322353    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:26.322363    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:26.322366    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:26 GMT
	I0806 00:38:26.322578    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:26.820921    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:26.820948    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:26.820966    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:26.820972    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:26.823698    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:26.823713    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:26.823723    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:26.823730    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:26.823739    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:26 GMT
	I0806 00:38:26.823757    4292 round_trippers.go:580]     Audit-Id: e8e852a8-07b7-455b-8f5b-ff9801610b22
	I0806 00:38:26.823766    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:26.823770    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:26.824211    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:26.824465    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:27.321232    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:27.321253    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:27.321265    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:27.321270    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:27.324530    4292 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:38:27.324543    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:27.324550    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:27.324554    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:27 GMT
	I0806 00:38:27.324566    4292 round_trippers.go:580]     Audit-Id: 4a0b2d15-d15f-46de-8b4a-13a9d4121efd
	I0806 00:38:27.324572    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:27.324578    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:27.324583    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:27.324732    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:27.820148    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:27.820170    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:27.820181    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:27.820186    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:27.822835    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:27.822859    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:27.823023    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:27.823030    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:27.823033    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:27.823038    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:27.823046    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:27 GMT
	I0806 00:38:27.823049    4292 round_trippers.go:580]     Audit-Id: 77dd4240-18e0-49c7-8881-ae5df446f885
	I0806 00:38:27.823127    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:28.319391    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:28.319412    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:28.319423    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:28.319431    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:28.321889    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:28.321906    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:28.321916    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:28.321923    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:28.321927    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:28.321930    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:28 GMT
	I0806 00:38:28.321933    4292 round_trippers.go:580]     Audit-Id: d4ff4fc8-d53b-4307-82a0-9a61164b0b18
	I0806 00:38:28.321937    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:28.322088    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:28.819334    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:28.819362    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:28.819374    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:28.819385    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:28.821814    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:28.821826    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:28.821833    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:28.821838    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:28.821843    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:28.821847    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:28.821851    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:28 GMT
	I0806 00:38:28.821855    4292 round_trippers.go:580]     Audit-Id: 9a79b284-c2c3-4adb-9d74-73805465144b
	I0806 00:38:28.821988    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:29.320103    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:29.320120    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:29.320128    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:29.320134    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:29.321966    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:29.321980    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:29.321987    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:29.322000    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:29.322005    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:29.322008    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:29 GMT
	I0806 00:38:29.322020    4292 round_trippers.go:580]     Audit-Id: 749bcf9b-24c9-4fac-99d8-ad9e961b1897
	I0806 00:38:29.322024    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:29.322094    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:29.322341    4292 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:38:29.819722    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:29.819743    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:29.819752    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:29.819760    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:29.822636    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:29.822668    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:29.822700    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:29.822711    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:29.822721    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:29.822735    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:29 GMT
	I0806 00:38:29.822748    4292 round_trippers.go:580]     Audit-Id: 5408f9b5-fba3-4495-a0b7-9791cf82019c
	I0806 00:38:29.822773    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:29.822903    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:30.320349    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:30.320370    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.320380    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.320385    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.322518    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:30.322531    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.322538    4292 round_trippers.go:580]     Audit-Id: 1df1df85-a25c-4470-876a-7b00620c8f9b
	I0806 00:38:30.322543    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.322546    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.322550    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.322553    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.322558    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.322794    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"352","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0806 00:38:30.820065    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:30.820087    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.820099    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.820111    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.822652    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:30.822673    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.822683    4292 round_trippers.go:580]     Audit-Id: 0926ae78-d98d-44a5-8489-5522ccd95503
	I0806 00:38:30.822689    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.822695    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.822700    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.822706    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.822713    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.823032    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:30.823315    4292 node_ready.go:49] node "multinode-100000" has status "Ready":"True"
	I0806 00:38:30.823329    4292 node_ready.go:38] duration metric: took 15.504306549s for node "multinode-100000" to be "Ready" ...
	I0806 00:38:30.823341    4292 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 00:38:30.823387    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:30.823395    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.823403    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.823407    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.825747    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:30.825756    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.825761    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.825764    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.825768    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.825770    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.825773    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.825775    4292 round_trippers.go:580]     Audit-Id: f1883856-a563-4d68-a4ed-7bface4b980a
	I0806 00:38:30.827206    4292 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"432"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"431","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56289 chars]
	I0806 00:38:30.829456    4292 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:30.829498    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:38:30.829503    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.829508    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.829512    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.830675    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:30.830684    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.830691    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.830696    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.830704    4292 round_trippers.go:580]     Audit-Id: f42eab96-6adf-4fcb-9345-e180ca00b73d
	I0806 00:38:30.830715    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.830718    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.830720    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.830856    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"431","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0806 00:38:30.831092    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:30.831099    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:30.831105    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:30.831107    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:30.832184    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:30.832191    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:30.832197    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:30.832203    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:30.832207    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:30.832212    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:30.832218    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:30 GMT
	I0806 00:38:30.832226    4292 round_trippers.go:580]     Audit-Id: d34ccfc2-089c-4010-b991-cc425a2b2446
	I0806 00:38:30.832371    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.329830    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:38:31.329844    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.329850    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.329854    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.331738    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:31.331767    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.331789    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.331808    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.331813    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.331817    4292 round_trippers.go:580]     Audit-Id: 32294b1b-fd5c-43f7-9851-1c5e5d04c3d9
	I0806 00:38:31.331820    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.331823    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.331921    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"431","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0806 00:38:31.332207    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.332215    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.332221    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.332225    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.333311    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:31.333324    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.333331    4292 round_trippers.go:580]     Audit-Id: a8b9458e-7f48-4e61-9daf-b2c4a52b1285
	I0806 00:38:31.333336    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.333342    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.333347    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.333351    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.333369    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.333493    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.830019    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:38:31.830040    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.830057    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.830063    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.832040    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:31.832055    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.832062    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.832068    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.832072    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.832076    4292 round_trippers.go:580]     Audit-Id: eae85e40-d774-4e35-8513-1a20542ce5f5
	I0806 00:38:31.832079    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.832082    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.832316    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"446","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6576 chars]
	I0806 00:38:31.832691    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.832701    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.832710    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.832715    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.833679    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.833688    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.833694    4292 round_trippers.go:580]     Audit-Id: ecd49a1b-eb24-4191-89bb-5cb071fd543a
	I0806 00:38:31.833699    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.833702    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.833711    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.833714    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.833717    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.833906    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.834082    4292 pod_ready.go:92] pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:31.834093    4292 pod_ready.go:81] duration metric: took 1.004604302s for pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.834101    4292 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.834131    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-100000
	I0806 00:38:31.834136    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.834141    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.834145    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.835126    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.835134    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.835139    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.835144    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.835147    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.835152    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.835155    4292 round_trippers.go:580]     Audit-Id: 8f3355e7-ed89-4a5c-9ef4-3f319a0b7eef
	I0806 00:38:31.835157    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.835289    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-100000","namespace":"kube-system","uid":"227ab7d9-399e-4151-bee7-1520182e38fe","resourceVersion":"333","creationTimestamp":"2024-08-06T07:37:59Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"4d956ffcd8bdef6a75a3174d9c9d792c","kubernetes.io/config.mirror":"4d956ffcd8bdef6a75a3174d9c9d792c","kubernetes.io/config.seen":"2024-08-06T07:37:55.730523562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:37:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0806 00:38:31.835498    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.835505    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.835510    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.835514    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.836524    4292 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:38:31.836533    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.836539    4292 round_trippers.go:580]     Audit-Id: a9fdb4f7-31e3-48e4-b5f3-023b2c5e4bab
	I0806 00:38:31.836547    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.836553    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.836556    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.836562    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.836568    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.836674    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.836837    4292 pod_ready.go:92] pod "etcd-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:31.836847    4292 pod_ready.go:81] duration metric: took 2.741532ms for pod "etcd-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.836854    4292 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.836883    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-100000
	I0806 00:38:31.836888    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.836894    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.836898    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.837821    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.837830    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.837836    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.837840    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.837844    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.837846    4292 round_trippers.go:580]     Audit-Id: 32a7a6c7-72cf-4b7f-8f80-7ebb5aaaf666
	I0806 00:38:31.837850    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.837853    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.838003    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-100000","namespace":"kube-system","uid":"ce1dee9b-5f30-49a9-9066-7faf5f65c4d3","resourceVersion":"331","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"7812fbdfd4f741d8b504bcb30d9268c5","kubernetes.io/config.mirror":"7812fbdfd4f741d8b504bcb30d9268c5","kubernetes.io/config.seen":"2024-08-06T07:38:00.425843150Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7684 chars]
	I0806 00:38:31.838230    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.838237    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.838243    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.838247    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.839014    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.839023    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.839030    4292 round_trippers.go:580]     Audit-Id: 7f28e0f4-8551-4462-aec2-766b8d2482cb
	I0806 00:38:31.839036    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.839040    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.839042    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.839045    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.839048    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.839181    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.839335    4292 pod_ready.go:92] pod "kube-apiserver-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:31.839345    4292 pod_ready.go:81] duration metric: took 2.482949ms for pod "kube-apiserver-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.839352    4292 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.839378    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-100000
	I0806 00:38:31.839383    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.839388    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.839392    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.840298    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.840305    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.840310    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.840313    4292 round_trippers.go:580]     Audit-Id: cf384588-551f-4b8a-b13b-1adda6dff10a
	I0806 00:38:31.840317    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.840320    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.840324    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.840328    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.840495    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-100000","namespace":"kube-system","uid":"cefe88fb-c337-47c3-b4f2-acdadde539f2","resourceVersion":"329","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0ae29164078dfb7d8ac7d5a935c4d875","kubernetes.io/config.mirror":"0ae29164078dfb7d8ac7d5a935c4d875","kubernetes.io/config.seen":"2024-08-06T07:38:00.425770816Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7259 chars]
	I0806 00:38:31.840707    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:31.840714    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.840719    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.840722    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.841465    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.841471    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.841476    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.841481    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.841487    4292 round_trippers.go:580]     Audit-Id: 9a301694-659b-414d-8736-740501267c17
	I0806 00:38:31.841491    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.841496    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.841500    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.841678    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:31.841830    4292 pod_ready.go:92] pod "kube-controller-manager-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:31.841836    4292 pod_ready.go:81] duration metric: took 2.479787ms for pod "kube-controller-manager-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.841842    4292 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-crsrr" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:31.841875    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crsrr
	I0806 00:38:31.841880    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:31.841885    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:31.841890    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:31.842875    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:31.842883    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:31.842888    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:31.842891    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:31.842895    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:31 GMT
	I0806 00:38:31.842898    4292 round_trippers.go:580]     Audit-Id: 9e07db72-d867-47d3-adbc-514b547e8978
	I0806 00:38:31.842901    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:31.842904    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:31.843113    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-crsrr","generateName":"kube-proxy-","namespace":"kube-system","uid":"f72beca3-9601-4aad-b3ba-33f8de5db052","resourceVersion":"403","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aeb7868a-2175-4480-b58d-3eb9a593c884","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aeb7868a-2175-4480-b58d-3eb9a593c884\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
	I0806 00:38:32.021239    4292 request.go:629] Waited for 177.889914ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:32.021360    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:32.021372    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.021384    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.021390    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.024288    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:32.024309    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.024318    4292 round_trippers.go:580]     Audit-Id: d85fbd21-5256-48bd-b92b-10eb012d9c7a
	I0806 00:38:32.024322    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.024327    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.024331    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.024336    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.024339    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.024617    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:32.024865    4292 pod_ready.go:92] pod "kube-proxy-crsrr" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:32.024877    4292 pod_ready.go:81] duration metric: took 183.025974ms for pod "kube-proxy-crsrr" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:32.024887    4292 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:32.222202    4292 request.go:629] Waited for 197.196804ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-100000
	I0806 00:38:32.222252    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-100000
	I0806 00:38:32.222260    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.222284    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.222291    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.225758    4292 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:38:32.225776    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.225783    4292 round_trippers.go:580]     Audit-Id: 9c5c96d8-55ee-43bd-b8fe-af3b79432f55
	I0806 00:38:32.225788    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.225791    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.225797    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.225800    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.225803    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.225862    4292 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-100000","namespace":"kube-system","uid":"773d7bde-86f3-4e9d-b4aa-67ca3b345180","resourceVersion":"332","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4d38f57d568be838072abd789adb44b9","kubernetes.io/config.mirror":"4d38f57d568be838072abd789adb44b9","kubernetes.io/config.seen":"2024-08-06T07:38:00.425836810Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0806 00:38:32.420759    4292 request.go:629] Waited for 194.652014ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:32.420927    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:38:32.420938    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.420949    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.420955    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.423442    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:32.423460    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.423471    4292 round_trippers.go:580]     Audit-Id: 04a6ba1a-a35c-4d8b-a087-80f9206646b4
	I0806 00:38:32.423478    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.423483    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.423488    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.423493    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.423499    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.423791    4292 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0806 00:38:32.424052    4292 pod_ready.go:92] pod "kube-scheduler-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:38:32.424064    4292 pod_ready.go:81] duration metric: took 399.162309ms for pod "kube-scheduler-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:38:32.424073    4292 pod_ready.go:38] duration metric: took 1.600692444s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 00:38:32.424096    4292 api_server.go:52] waiting for apiserver process to appear ...
	I0806 00:38:32.424160    4292 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:38:32.436813    4292 command_runner.go:130] > 1953
	I0806 00:38:32.436840    4292 api_server.go:72] duration metric: took 17.725484476s to wait for apiserver process to appear ...
	I0806 00:38:32.436849    4292 api_server.go:88] waiting for apiserver healthz status ...
	I0806 00:38:32.436863    4292 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0806 00:38:32.440364    4292 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0806 00:38:32.440399    4292 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I0806 00:38:32.440404    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.440410    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.440421    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.440928    4292 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:38:32.440937    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.440942    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.440946    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.440950    4292 round_trippers.go:580]     Content-Length: 263
	I0806 00:38:32.440953    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.440959    4292 round_trippers.go:580]     Audit-Id: c1a3bf62-d4bb-49fe-bb9c-6619b1793ab6
	I0806 00:38:32.440962    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.440965    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.440976    4292 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0806 00:38:32.441018    4292 api_server.go:141] control plane version: v1.30.3
	I0806 00:38:32.441028    4292 api_server.go:131] duration metric: took 4.174407ms to wait for apiserver health ...
	I0806 00:38:32.441033    4292 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 00:38:32.620918    4292 request.go:629] Waited for 179.84972ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:32.620960    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:32.620982    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.620988    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.620992    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.623183    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:32.623194    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.623199    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.623202    4292 round_trippers.go:580]     Audit-Id: 7febd61d-780d-47b6-884a-fdaf22170934
	I0806 00:38:32.623206    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.623211    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.623217    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.623221    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.623596    4292 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"446","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0806 00:38:32.624861    4292 system_pods.go:59] 8 kube-system pods found
	I0806 00:38:32.624876    4292 system_pods.go:61] "coredns-7db6d8ff4d-snf8h" [80bd44de-6f91-4e47-8832-a66b3c64808d] Running
	I0806 00:38:32.624880    4292 system_pods.go:61] "etcd-multinode-100000" [227ab7d9-399e-4151-bee7-1520182e38fe] Running
	I0806 00:38:32.624883    4292 system_pods.go:61] "kindnet-g2xk7" [84207ead-3403-4759-9bf2-ae0aa742699e] Running
	I0806 00:38:32.624886    4292 system_pods.go:61] "kube-apiserver-multinode-100000" [ce1dee9b-5f30-49a9-9066-7faf5f65c4d3] Running
	I0806 00:38:32.624890    4292 system_pods.go:61] "kube-controller-manager-multinode-100000" [cefe88fb-c337-47c3-b4f2-acdadde539f2] Running
	I0806 00:38:32.624895    4292 system_pods.go:61] "kube-proxy-crsrr" [f72beca3-9601-4aad-b3ba-33f8de5db052] Running
	I0806 00:38:32.624897    4292 system_pods.go:61] "kube-scheduler-multinode-100000" [773d7bde-86f3-4e9d-b4aa-67ca3b345180] Running
	I0806 00:38:32.624900    4292 system_pods.go:61] "storage-provisioner" [38b20fa5-6002-4e12-860f-1aa0047581b1] Running
	I0806 00:38:32.624904    4292 system_pods.go:74] duration metric: took 183.863815ms to wait for pod list to return data ...
	I0806 00:38:32.624911    4292 default_sa.go:34] waiting for default service account to be created ...
	I0806 00:38:32.821065    4292 request.go:629] Waited for 196.088199ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0806 00:38:32.821123    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0806 00:38:32.821132    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:32.821146    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:32.821153    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:32.824169    4292 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:38:32.824185    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:32.824192    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:32.824198    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:32.824203    4292 round_trippers.go:580]     Content-Length: 261
	I0806 00:38:32.824207    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:32 GMT
	I0806 00:38:32.824210    4292 round_trippers.go:580]     Audit-Id: da9e49d4-6671-4b25-a056-32b71af0fb45
	I0806 00:38:32.824214    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:32.824217    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:32.824230    4292 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b920a0f4-26ad-4389-bfd3-1a9764da9619","resourceVersion":"336","creationTimestamp":"2024-08-06T07:38:14Z"}}]}
	I0806 00:38:32.824397    4292 default_sa.go:45] found service account: "default"
	I0806 00:38:32.824409    4292 default_sa.go:55] duration metric: took 199.488573ms for default service account to be created ...
	I0806 00:38:32.824419    4292 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 00:38:33.021550    4292 request.go:629] Waited for 197.072106ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:33.021720    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:38:33.021731    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:33.021741    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:33.021779    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:33.025126    4292 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:38:33.025143    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:33.025150    4292 round_trippers.go:580]     Audit-Id: e38b20d4-b38f-40c8-9e18-7f94f8f63289
	I0806 00:38:33.025155    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:33.025161    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:33.025166    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:33.025173    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:33.025177    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:33 GMT
	I0806 00:38:33.025737    4292 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"446","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0806 00:38:33.027034    4292 system_pods.go:86] 8 kube-system pods found
	I0806 00:38:33.027043    4292 system_pods.go:89] "coredns-7db6d8ff4d-snf8h" [80bd44de-6f91-4e47-8832-a66b3c64808d] Running
	I0806 00:38:33.027047    4292 system_pods.go:89] "etcd-multinode-100000" [227ab7d9-399e-4151-bee7-1520182e38fe] Running
	I0806 00:38:33.027050    4292 system_pods.go:89] "kindnet-g2xk7" [84207ead-3403-4759-9bf2-ae0aa742699e] Running
	I0806 00:38:33.027054    4292 system_pods.go:89] "kube-apiserver-multinode-100000" [ce1dee9b-5f30-49a9-9066-7faf5f65c4d3] Running
	I0806 00:38:33.027057    4292 system_pods.go:89] "kube-controller-manager-multinode-100000" [cefe88fb-c337-47c3-b4f2-acdadde539f2] Running
	I0806 00:38:33.027060    4292 system_pods.go:89] "kube-proxy-crsrr" [f72beca3-9601-4aad-b3ba-33f8de5db052] Running
	I0806 00:38:33.027066    4292 system_pods.go:89] "kube-scheduler-multinode-100000" [773d7bde-86f3-4e9d-b4aa-67ca3b345180] Running
	I0806 00:38:33.027069    4292 system_pods.go:89] "storage-provisioner" [38b20fa5-6002-4e12-860f-1aa0047581b1] Running
	I0806 00:38:33.027074    4292 system_pods.go:126] duration metric: took 202.645822ms to wait for k8s-apps to be running ...
	I0806 00:38:33.027081    4292 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 00:38:33.027147    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:38:33.038782    4292 system_svc.go:56] duration metric: took 11.697186ms WaitForService to wait for kubelet
	I0806 00:38:33.038797    4292 kubeadm.go:582] duration metric: took 18.327429775s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 00:38:33.038809    4292 node_conditions.go:102] verifying NodePressure condition ...
	I0806 00:38:33.220593    4292 request.go:629] Waited for 181.736174ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes
	I0806 00:38:33.220673    4292 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0806 00:38:33.220683    4292 round_trippers.go:469] Request Headers:
	I0806 00:38:33.220694    4292 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:38:33.220703    4292 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:38:33.223131    4292 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:38:33.223147    4292 round_trippers.go:577] Response Headers:
	I0806 00:38:33.223155    4292 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:38:33 GMT
	I0806 00:38:33.223160    4292 round_trippers.go:580]     Audit-Id: c7a766de-973c-44db-9b8e-eb7ce291fdca
	I0806 00:38:33.223172    4292 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:38:33.223177    4292 round_trippers.go:580]     Content-Type: application/json
	I0806 00:38:33.223182    4292 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:38:33.223222    4292 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:38:33.223296    4292 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"450"},"items":[{"metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"426","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5011 chars]
	I0806 00:38:33.223576    4292 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 00:38:33.223592    4292 node_conditions.go:123] node cpu capacity is 2
	I0806 00:38:33.223604    4292 node_conditions.go:105] duration metric: took 184.787012ms to run NodePressure ...
	I0806 00:38:33.223614    4292 start.go:241] waiting for startup goroutines ...
	I0806 00:38:33.223627    4292 start.go:246] waiting for cluster config update ...
	I0806 00:38:33.223640    4292 start.go:255] writing updated cluster config ...
	I0806 00:38:33.244314    4292 out.go:177] 
	I0806 00:38:33.265217    4292 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:38:33.265273    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:38:33.287112    4292 out.go:177] * Starting "multinode-100000-m02" worker node in "multinode-100000" cluster
	I0806 00:38:33.345022    4292 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:38:33.345057    4292 cache.go:56] Caching tarball of preloaded images
	I0806 00:38:33.345244    4292 preload.go:172] Found /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0806 00:38:33.345262    4292 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 00:38:33.345351    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:38:33.346110    4292 start.go:360] acquireMachinesLock for multinode-100000-m02: {Name:mk23fe223591838ba69a1052c4474834b6e8897d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:38:33.346217    4292 start.go:364] duration metric: took 84.997µs to acquireMachinesLock for "multinode-100000-m02"
	I0806 00:38:33.346243    4292 start.go:93] Provisioning new machine with config: &{Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0806 00:38:33.346328    4292 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0806 00:38:33.367079    4292 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0806 00:38:33.367208    4292 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:38:33.367236    4292 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:38:33.376938    4292 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52447
	I0806 00:38:33.377289    4292 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:38:33.377644    4292 main.go:141] libmachine: Using API Version  1
	I0806 00:38:33.377655    4292 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:38:33.377842    4292 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:38:33.377956    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:38:33.378049    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:33.378167    4292 start.go:159] libmachine.API.Create for "multinode-100000" (driver="hyperkit")
	I0806 00:38:33.378183    4292 client.go:168] LocalClient.Create starting
	I0806 00:38:33.378211    4292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem
	I0806 00:38:33.378259    4292 main.go:141] libmachine: Decoding PEM data...
	I0806 00:38:33.378273    4292 main.go:141] libmachine: Parsing certificate...
	I0806 00:38:33.378324    4292 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem
	I0806 00:38:33.378363    4292 main.go:141] libmachine: Decoding PEM data...
	I0806 00:38:33.378372    4292 main.go:141] libmachine: Parsing certificate...
	I0806 00:38:33.378386    4292 main.go:141] libmachine: Running pre-create checks...
	I0806 00:38:33.378391    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .PreCreateCheck
	I0806 00:38:33.378464    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:33.378493    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetConfigRaw
	I0806 00:38:33.388269    4292 main.go:141] libmachine: Creating machine...
	I0806 00:38:33.388286    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .Create
	I0806 00:38:33.388457    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:33.388692    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | I0806 00:38:33.388444    4424 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 00:38:33.388794    4292 main.go:141] libmachine: (multinode-100000-m02) Downloading /Users/jenkins/minikube-integration/19370-944/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-944/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 00:38:33.588443    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | I0806 00:38:33.588344    4424 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa...
	I0806 00:38:33.635329    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | I0806 00:38:33.635211    4424 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/multinode-100000-m02.rawdisk...
	I0806 00:38:33.635352    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Writing magic tar header
	I0806 00:38:33.635368    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Writing SSH key tar header
	I0806 00:38:33.635773    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | I0806 00:38:33.635735    4424 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02 ...
	I0806 00:38:34.046661    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:34.046692    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/hyperkit.pid
	I0806 00:38:34.046795    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Using UUID 11e38ce6-805a-4a8b-9cb1-968ee3a613d4
	I0806 00:38:34.072180    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Generated MAC ee:b:b7:3a:75:5c
	I0806 00:38:34.072206    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000
	I0806 00:38:34.072252    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"11e38ce6-805a-4a8b-9cb1-968ee3a613d4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00011a450)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", pr
ocess:(*os.Process)(nil)}
	I0806 00:38:34.072281    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"11e38ce6-805a-4a8b-9cb1-968ee3a613d4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00011a450)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", pr
ocess:(*os.Process)(nil)}
	I0806 00:38:34.072340    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "11e38ce6-805a-4a8b-9cb1-968ee3a613d4", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/multinode-100000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage,/Users/jenkins
/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"}
	I0806 00:38:34.072382    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 11e38ce6-805a-4a8b-9cb1-968ee3a613d4 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/multinode-100000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"
	I0806 00:38:34.072394    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0806 00:38:34.075231    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 DEBUG: hyperkit: Pid is 4427
	I0806 00:38:34.076417    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 0
	I0806 00:38:34.076438    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:34.076502    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:34.077372    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:34.077449    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:34.077468    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:34.077497    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:34.077509    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:34.077532    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:34.077550    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:34.077560    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:34.077570    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:34.077578    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:34.077587    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:34.077606    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:34.077631    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:34.077647    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:34.082964    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0806 00:38:34.092078    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0806 00:38:34.092798    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:38:34.092819    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:38:34.092831    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:38:34.092850    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:38:34.480770    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0806 00:38:34.480795    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0806 00:38:34.595499    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:38:34.595518    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:38:34.595530    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:38:34.595538    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:38:34.596350    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0806 00:38:34.596362    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0806 00:38:36.077787    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 1
	I0806 00:38:36.077803    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:36.077889    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:36.078719    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:36.078768    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:36.078779    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:36.078796    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:36.078805    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:36.078813    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:36.078820    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:36.078827    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:36.078837    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:36.078843    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:36.078849    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:36.078864    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:36.078881    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:36.078889    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:38.079369    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 2
	I0806 00:38:38.079385    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:38.079432    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:38.080212    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:38.080262    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:38.080273    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:38.080290    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:38.080296    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:38.080303    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:38.080310    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:38.080318    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:38.080325    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:38.080339    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:38.080355    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:38.080367    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:38.080376    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:38.080384    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:40.081876    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 3
	I0806 00:38:40.081892    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:40.081903    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:40.082774    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:40.082801    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:40.082812    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:40.082846    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:40.082873    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:40.082900    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:40.082918    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:40.082931    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:40.082940    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:40.082950    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:40.082966    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:40.082978    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:40.082987    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:40.082995    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:40.179725    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:40 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0806 00:38:40.179781    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:40 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0806 00:38:40.179795    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:40 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0806 00:38:40.203197    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:38:40 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0806 00:38:42.084360    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 4
	I0806 00:38:42.084374    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:42.084499    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:42.085281    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:42.085335    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0806 00:38:42.085343    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:38:42.085351    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 00:38:42.085358    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 00:38:42.085365    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 00:38:42.085371    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 00:38:42.085378    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 00:38:42.085386    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 00:38:42.085402    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 00:38:42.085414    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 00:38:42.085433    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 00:38:42.085441    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 00:38:42.085450    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 00:38:44.085602    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 5
	I0806 00:38:44.085628    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:44.085697    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:44.086496    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:38:44.086550    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 13 entries in /var/db/dhcpd_leases!
	I0806 00:38:44.086561    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b32483}
	I0806 00:38:44.086569    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | Found match: ee:b:b7:3a:75:5c
	I0806 00:38:44.086577    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | IP: 192.169.0.14
	I0806 00:38:44.086637    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetConfigRaw
	I0806 00:38:44.087855    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:44.087962    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:44.088059    4292 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0806 00:38:44.088068    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetState
	I0806 00:38:44.088141    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:38:44.088197    4292 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:38:44.089006    4292 main.go:141] libmachine: Detecting operating system of created instance...
	I0806 00:38:44.089014    4292 main.go:141] libmachine: Waiting for SSH to be available...
	I0806 00:38:44.089023    4292 main.go:141] libmachine: Getting to WaitForSSH function...
	I0806 00:38:44.089029    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:44.089111    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:44.089190    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:44.089273    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:44.089354    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:44.089473    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:44.089664    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:44.089672    4292 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0806 00:38:45.153792    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:38:45.153806    4292 main.go:141] libmachine: Detecting the provisioner...
	I0806 00:38:45.153811    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.153942    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.154043    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.154169    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.154275    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.154425    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.154571    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.154581    4292 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0806 00:38:45.217564    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0806 00:38:45.217637    4292 main.go:141] libmachine: found compatible host: buildroot
	I0806 00:38:45.217648    4292 main.go:141] libmachine: Provisioning with buildroot...
	I0806 00:38:45.217668    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:38:45.217807    4292 buildroot.go:166] provisioning hostname "multinode-100000-m02"
	I0806 00:38:45.217817    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:38:45.217917    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.218023    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.218107    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.218194    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.218285    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.218407    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.218557    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.218566    4292 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-100000-m02 && echo "multinode-100000-m02" | sudo tee /etc/hostname
	I0806 00:38:45.293086    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-100000-m02
	
	I0806 00:38:45.293102    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.293254    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.293346    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.293437    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.293522    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.293658    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.293798    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.293811    4292 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-100000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-100000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-100000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 00:38:45.363408    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:38:45.363423    4292 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19370-944/.minikube CaCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19370-944/.minikube}
	I0806 00:38:45.363450    4292 buildroot.go:174] setting up certificates
	I0806 00:38:45.363457    4292 provision.go:84] configureAuth start
	I0806 00:38:45.363465    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:38:45.363605    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:38:45.363709    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.363796    4292 provision.go:143] copyHostCerts
	I0806 00:38:45.363827    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:38:45.363873    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem, removing ...
	I0806 00:38:45.363879    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:38:45.364378    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem (1078 bytes)
	I0806 00:38:45.364592    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:38:45.364623    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem, removing ...
	I0806 00:38:45.364628    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:38:45.364717    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem (1123 bytes)
	I0806 00:38:45.364875    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:38:45.364915    4292 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem, removing ...
	I0806 00:38:45.364920    4292 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:38:45.365034    4292 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem (1679 bytes)
	I0806 00:38:45.365183    4292 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem org=jenkins.multinode-100000-m02 san=[127.0.0.1 192.169.0.14 localhost minikube multinode-100000-m02]
	I0806 00:38:45.437744    4292 provision.go:177] copyRemoteCerts
	I0806 00:38:45.437791    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 00:38:45.437806    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.437948    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.438040    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.438126    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.438207    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:38:45.477030    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0806 00:38:45.477105    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0806 00:38:45.496899    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0806 00:38:45.496965    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 00:38:45.516273    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0806 00:38:45.516341    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 00:38:45.536083    4292 provision.go:87] duration metric: took 172.615051ms to configureAuth
	I0806 00:38:45.536096    4292 buildroot.go:189] setting minikube options for container-runtime
	I0806 00:38:45.536221    4292 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:38:45.536234    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:45.536380    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.536470    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.536563    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.536650    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.536733    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.536861    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.536987    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.536994    4292 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0806 00:38:45.599518    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0806 00:38:45.599531    4292 buildroot.go:70] root file system type: tmpfs
	I0806 00:38:45.599626    4292 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0806 00:38:45.599637    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.599779    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.599891    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.599996    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.600086    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.600232    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.600374    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.600420    4292 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.13"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0806 00:38:45.674942    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.13
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0806 00:38:45.674960    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:45.675092    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:45.675165    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.675259    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:45.675344    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:45.675469    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:45.675602    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:45.675614    4292 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0806 00:38:47.211811    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0806 00:38:47.211826    4292 main.go:141] libmachine: Checking connection to Docker...
	I0806 00:38:47.211840    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetURL
	I0806 00:38:47.211985    4292 main.go:141] libmachine: Docker is up and running!
	I0806 00:38:47.211993    4292 main.go:141] libmachine: Reticulating splines...
	I0806 00:38:47.212004    4292 client.go:171] duration metric: took 13.833536596s to LocalClient.Create
	I0806 00:38:47.212016    4292 start.go:167] duration metric: took 13.833577856s to libmachine.API.Create "multinode-100000"
	I0806 00:38:47.212022    4292 start.go:293] postStartSetup for "multinode-100000-m02" (driver="hyperkit")
	I0806 00:38:47.212029    4292 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 00:38:47.212038    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.212165    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 00:38:47.212186    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:47.212274    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:47.212359    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.212450    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:47.212536    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:38:47.253675    4292 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 00:38:47.257359    4292 command_runner.go:130] > NAME=Buildroot
	I0806 00:38:47.257369    4292 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0806 00:38:47.257374    4292 command_runner.go:130] > ID=buildroot
	I0806 00:38:47.257380    4292 command_runner.go:130] > VERSION_ID=2023.02.9
	I0806 00:38:47.257386    4292 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0806 00:38:47.257598    4292 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 00:38:47.257609    4292 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/addons for local assets ...
	I0806 00:38:47.257715    4292 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/files for local assets ...
	I0806 00:38:47.257899    4292 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> 14372.pem in /etc/ssl/certs
	I0806 00:38:47.257909    4292 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> /etc/ssl/certs/14372.pem
	I0806 00:38:47.258116    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 00:38:47.265892    4292 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem --> /etc/ssl/certs/14372.pem (1708 bytes)
	I0806 00:38:47.297110    4292 start.go:296] duration metric: took 85.078237ms for postStartSetup
	I0806 00:38:47.297144    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetConfigRaw
	I0806 00:38:47.297792    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:38:47.297951    4292 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:38:47.298302    4292 start.go:128] duration metric: took 13.951673071s to createHost
	I0806 00:38:47.298316    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:47.298413    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:47.298502    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.298600    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.298678    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:47.298783    4292 main.go:141] libmachine: Using SSH client type: native
	I0806 00:38:47.298907    4292 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x111cc0c0] 0x111cee20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:38:47.298914    4292 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 00:38:47.362043    4292 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722929927.409318196
	
	I0806 00:38:47.362057    4292 fix.go:216] guest clock: 1722929927.409318196
	I0806 00:38:47.362062    4292 fix.go:229] Guest: 2024-08-06 00:38:47.409318196 -0700 PDT Remote: 2024-08-06 00:38:47.29831 -0700 PDT m=+194.654596821 (delta=111.008196ms)
	I0806 00:38:47.362071    4292 fix.go:200] guest clock delta is within tolerance: 111.008196ms
	I0806 00:38:47.362075    4292 start.go:83] releasing machines lock for "multinode-100000-m02", held for 14.015572789s
	I0806 00:38:47.362092    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.362220    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:38:47.382612    4292 out.go:177] * Found network options:
	I0806 00:38:47.403509    4292 out.go:177]   - NO_PROXY=192.169.0.13
	W0806 00:38:47.425687    4292 proxy.go:119] fail to check proxy env: Error ip not in block
	I0806 00:38:47.425738    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.426659    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.426958    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:38:47.427090    4292 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 00:38:47.427141    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	W0806 00:38:47.427187    4292 proxy.go:119] fail to check proxy env: Error ip not in block
	I0806 00:38:47.427313    4292 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0806 00:38:47.427341    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:38:47.427407    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:47.427565    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.427581    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:38:47.427794    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:47.427828    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:38:47.428004    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:38:47.428059    4292 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:38:47.428184    4292 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:38:47.463967    4292 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0806 00:38:47.464076    4292 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 00:38:47.464135    4292 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 00:38:47.515738    4292 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0806 00:38:47.516046    4292 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0806 00:38:47.516081    4292 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 00:38:47.516093    4292 start.go:495] detecting cgroup driver to use...
	I0806 00:38:47.516195    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:38:47.531806    4292 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0806 00:38:47.532062    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0806 00:38:47.541039    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0806 00:38:47.549828    4292 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0806 00:38:47.549876    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0806 00:38:47.558599    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:38:47.567484    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0806 00:38:47.576295    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:38:47.585146    4292 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 00:38:47.594084    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0806 00:38:47.603103    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0806 00:38:47.612032    4292 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0806 00:38:47.620981    4292 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 00:38:47.628905    4292 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0806 00:38:47.629040    4292 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 00:38:47.637032    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:38:47.727863    4292 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0806 00:38:47.745831    4292 start.go:495] detecting cgroup driver to use...
	I0806 00:38:47.745898    4292 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0806 00:38:47.763079    4292 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0806 00:38:47.764017    4292 command_runner.go:130] > [Unit]
	I0806 00:38:47.764028    4292 command_runner.go:130] > Description=Docker Application Container Engine
	I0806 00:38:47.764033    4292 command_runner.go:130] > Documentation=https://docs.docker.com
	I0806 00:38:47.764038    4292 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0806 00:38:47.764043    4292 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0806 00:38:47.764047    4292 command_runner.go:130] > StartLimitBurst=3
	I0806 00:38:47.764051    4292 command_runner.go:130] > StartLimitIntervalSec=60
	I0806 00:38:47.764054    4292 command_runner.go:130] > [Service]
	I0806 00:38:47.764058    4292 command_runner.go:130] > Type=notify
	I0806 00:38:47.764062    4292 command_runner.go:130] > Restart=on-failure
	I0806 00:38:47.764066    4292 command_runner.go:130] > Environment=NO_PROXY=192.169.0.13
	I0806 00:38:47.764072    4292 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0806 00:38:47.764084    4292 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0806 00:38:47.764091    4292 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0806 00:38:47.764099    4292 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0806 00:38:47.764105    4292 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0806 00:38:47.764111    4292 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0806 00:38:47.764118    4292 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0806 00:38:47.764125    4292 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0806 00:38:47.764132    4292 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0806 00:38:47.764135    4292 command_runner.go:130] > ExecStart=
	I0806 00:38:47.764154    4292 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0806 00:38:47.764161    4292 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0806 00:38:47.764170    4292 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0806 00:38:47.764178    4292 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0806 00:38:47.764185    4292 command_runner.go:130] > LimitNOFILE=infinity
	I0806 00:38:47.764190    4292 command_runner.go:130] > LimitNPROC=infinity
	I0806 00:38:47.764193    4292 command_runner.go:130] > LimitCORE=infinity
	I0806 00:38:47.764198    4292 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0806 00:38:47.764203    4292 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0806 00:38:47.764207    4292 command_runner.go:130] > TasksMax=infinity
	I0806 00:38:47.764211    4292 command_runner.go:130] > TimeoutStartSec=0
	I0806 00:38:47.764221    4292 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0806 00:38:47.764225    4292 command_runner.go:130] > Delegate=yes
	I0806 00:38:47.764229    4292 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0806 00:38:47.764248    4292 command_runner.go:130] > KillMode=process
	I0806 00:38:47.764252    4292 command_runner.go:130] > [Install]
	I0806 00:38:47.764256    4292 command_runner.go:130] > WantedBy=multi-user.target
	I0806 00:38:47.765971    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:38:47.779284    4292 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 00:38:47.799617    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:38:47.811733    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:38:47.822897    4292 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0806 00:38:47.842546    4292 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:38:47.852923    4292 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:38:47.867417    4292 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0806 00:38:47.867762    4292 ssh_runner.go:195] Run: which cri-dockerd
	I0806 00:38:47.870482    4292 command_runner.go:130] > /usr/bin/cri-dockerd
	I0806 00:38:47.870656    4292 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0806 00:38:47.877934    4292 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0806 00:38:47.891287    4292 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0806 00:38:47.996736    4292 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0806 00:38:48.093921    4292 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0806 00:38:48.093947    4292 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0806 00:38:48.107654    4292 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:38:48.205348    4292 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:39:49.225463    4292 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0806 00:39:49.225479    4292 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0806 00:39:49.225576    4292 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.019011706s)
	I0806 00:39:49.225635    4292 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0806 00:39:49.235342    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0806 00:39:49.235356    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.029974914Z" level=info msg="Starting up"
	I0806 00:39:49.235366    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.030437769Z" level=info msg="containerd not running, starting managed containerd"
	I0806 00:39:49.235376    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.030979400Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=517
	I0806 00:39:49.235386    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.047036729Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0806 00:39:49.235397    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064397167Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0806 00:39:49.235412    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064452673Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0806 00:39:49.235422    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064502313Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0806 00:39:49.235431    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064513542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235443    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064584182Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235454    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064595120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235473    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064727739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235483    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064762709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235494    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064774342Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235504    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064782161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235516    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064887916Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235526    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.065042581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235542    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.066836201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235552    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.066879570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0806 00:39:49.235575    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067028916Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:39:49.235585    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067064324Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0806 00:39:49.235594    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067179567Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0806 00:39:49.235602    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067249087Z" level=info msg="metadata content store policy set" policy=shared
	I0806 00:39:49.235611    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069585528Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0806 00:39:49.235620    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069659860Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0806 00:39:49.235632    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069674694Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0806 00:39:49.235641    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069684754Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0806 00:39:49.235650    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069696901Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0806 00:39:49.235663    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069776277Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0806 00:39:49.235672    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070041788Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0806 00:39:49.235681    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070145442Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0806 00:39:49.235690    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070181841Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0806 00:39:49.235699    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070193788Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0806 00:39:49.235708    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070209053Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235730    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070220561Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235739    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070229053Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235748    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070237872Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235765    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070247145Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235774    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070258808Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235870    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070271932Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235884    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070282113Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0806 00:39:49.235895    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070295317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235905    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070333749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235913    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070369063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235922    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070379382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235931    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070387399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235940    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070395816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235948    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070403669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235957    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070414456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235966    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070430669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235975    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070442977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235983    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070451302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.235992    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070459477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236001    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070468439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236009    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070478113Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0806 00:39:49.236018    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070497412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236026    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070508384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236035    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070518009Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0806 00:39:49.236044    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070547883Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0806 00:39:49.236055    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070582373Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0806 00:39:49.236065    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070592270Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0806 00:39:49.236165    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070600495Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0806 00:39:49.236179    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070607217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0806 00:39:49.236192    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070615273Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0806 00:39:49.236200    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070622931Z" level=info msg="NRI interface is disabled by configuration."
	I0806 00:39:49.236208    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070750538Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0806 00:39:49.236217    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070809085Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0806 00:39:49.236224    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070954500Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0806 00:39:49.236232    4292 command_runner.go:130] > Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070997549Z" level=info msg="containerd successfully booted in 0.024512s"
	I0806 00:39:49.236240    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.050791909Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0806 00:39:49.236247    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.057142082Z" level=info msg="Loading containers: start."
	I0806 00:39:49.236266    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.142415375Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0806 00:39:49.236275    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.222958623Z" level=info msg="Loading containers: done."
	I0806 00:39:49.236287    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.231011060Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	I0806 00:39:49.236296    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.231179810Z" level=info msg="Daemon has completed initialization"
	I0806 00:39:49.236304    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.256766502Z" level=info msg="API listen on [::]:2376"
	I0806 00:39:49.236312    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 systemd[1]: Started Docker Application Container Engine.
	I0806 00:39:49.236320    4292 command_runner.go:130] > Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.256921161Z" level=info msg="API listen on /var/run/docker.sock"
	I0806 00:39:49.236327    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.264611587Z" level=info msg="Processing signal 'terminated'"
	I0806 00:39:49.236336    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265650519Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0806 00:39:49.236346    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265852818Z" level=info msg="Daemon shutdown complete"
	I0806 00:39:49.236355    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265902413Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0806 00:39:49.236364    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265913447Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0806 00:39:49.236371    4292 command_runner.go:130] > Aug 06 07:38:48 multinode-100000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0806 00:39:49.236376    4292 command_runner.go:130] > Aug 06 07:38:49 multinode-100000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0806 00:39:49.236404    4292 command_runner.go:130] > Aug 06 07:38:49 multinode-100000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0806 00:39:49.236411    4292 command_runner.go:130] > Aug 06 07:38:49 multinode-100000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0806 00:39:49.236417    4292 command_runner.go:130] > Aug 06 07:38:49 multinode-100000-m02 dockerd[911]: time="2024-08-06T07:38:49.299585024Z" level=info msg="Starting up"
	I0806 00:39:49.236427    4292 command_runner.go:130] > Aug 06 07:39:49 multinode-100000-m02 dockerd[911]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0806 00:39:49.236434    4292 command_runner.go:130] > Aug 06 07:39:49 multinode-100000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0806 00:39:49.236440    4292 command_runner.go:130] > Aug 06 07:39:49 multinode-100000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0806 00:39:49.236446    4292 command_runner.go:130] > Aug 06 07:39:49 multinode-100000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0806 00:39:49.260697    4292 out.go:177] 
	W0806 00:39:49.281618    4292 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 06 07:38:46 multinode-100000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.029974914Z" level=info msg="Starting up"
	Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.030437769Z" level=info msg="containerd not running, starting managed containerd"
	Aug 06 07:38:46 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:46.030979400Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=517
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.047036729Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064397167Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064452673Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064502313Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064513542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064584182Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064595120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064727739Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064762709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064774342Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064782161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.064887916Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.065042581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.066836201Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.066879570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067028916Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067064324Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067179567Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.067249087Z" level=info msg="metadata content store policy set" policy=shared
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069585528Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069659860Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069674694Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069684754Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069696901Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.069776277Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070041788Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070145442Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070181841Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070193788Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070209053Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070220561Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070229053Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070237872Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070247145Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070258808Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070271932Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070282113Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070295317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070333749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070369063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070379382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070387399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070395816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070403669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070414456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070430669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070442977Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070451302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070459477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070468439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070478113Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070497412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070508384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070518009Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070547883Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070582373Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070592270Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070600495Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070607217Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070615273Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070622931Z" level=info msg="NRI interface is disabled by configuration."
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070750538Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070809085Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070954500Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 06 07:38:46 multinode-100000-m02 dockerd[517]: time="2024-08-06T07:38:46.070997549Z" level=info msg="containerd successfully booted in 0.024512s"
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.050791909Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.057142082Z" level=info msg="Loading containers: start."
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.142415375Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.222958623Z" level=info msg="Loading containers: done."
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.231011060Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.231179810Z" level=info msg="Daemon has completed initialization"
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.256766502Z" level=info msg="API listen on [::]:2376"
	Aug 06 07:38:47 multinode-100000-m02 systemd[1]: Started Docker Application Container Engine.
	Aug 06 07:38:47 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:47.256921161Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.264611587Z" level=info msg="Processing signal 'terminated'"
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265650519Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265852818Z" level=info msg="Daemon shutdown complete"
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265902413Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 06 07:38:48 multinode-100000-m02 dockerd[510]: time="2024-08-06T07:38:48.265913447Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 06 07:38:48 multinode-100000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Aug 06 07:38:49 multinode-100000-m02 systemd[1]: docker.service: Deactivated successfully.
	Aug 06 07:38:49 multinode-100000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:38:49 multinode-100000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:38:49 multinode-100000-m02 dockerd[911]: time="2024-08-06T07:38:49.299585024Z" level=info msg="Starting up"
	Aug 06 07:39:49 multinode-100000-m02 dockerd[911]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 06 07:39:49 multinode-100000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 06 07:39:49 multinode-100000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 06 07:39:49 multinode-100000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0806 00:39:49.281745    4292 out.go:239] * 
	W0806 00:39:49.282923    4292 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 00:39:49.343567    4292 out.go:177] 
	
	
	==> Docker <==
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.120405532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.122053171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.122124908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.122262728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.123348677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 cri-dockerd[1120]: time="2024-08-06T07:38:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5fae897eca5b0180afaec9950c31ab8fe6410f45ea64033ab2505d448d0abc87/resolv.conf as [nameserver 192.169.0.1]"
	Aug 06 07:38:31 multinode-100000 cri-dockerd[1120]: time="2024-08-06T07:38:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ea5bc31c54836987e38373933c6df0383027c87ef8cff7c9e1da5b24b5cabe9c/resolv.conf as [nameserver 192.169.0.1]"
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.260884497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.261094181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.261344995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.270291928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.310563342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.310630330Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.310652817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:38:31 multinode-100000 dockerd[1226]: time="2024-08-06T07:38:31.310750128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:39:53 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:53.415212392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:39:53 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:53.415272093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:39:53 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:53.415281683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:39:53 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:53.415427967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:39:53 multinode-100000 cri-dockerd[1120]: time="2024-08-06T07:39:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/730773bd53054521739eb2bf3731e90f06df86c05a2f2435964943abea426db3/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 06 07:39:54 multinode-100000 cri-dockerd[1120]: time="2024-08-06T07:39:54Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Aug 06 07:39:54 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:54.619309751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:39:54 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:54.619368219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:39:54 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:54.619377598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:39:54 multinode-100000 dockerd[1226]: time="2024-08-06T07:39:54.619772649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f4860a1bb0cb9       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   14 minutes ago      Running             busybox                   0                   730773bd53054       busybox-fc5497c4f-dzbn7
	4a58bc5cb9c3e       cbb01a7bd410d                                                                                         15 minutes ago      Running             coredns                   0                   ea5bc31c54836       coredns-7db6d8ff4d-snf8h
	47e0c0c6895ef       6e38f40d628db                                                                                         15 minutes ago      Running             storage-provisioner       0                   5fae897eca5b0       storage-provisioner
	ca21c7b20c75e       kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3              16 minutes ago      Running             kindnet-cni               0                   731b397a827bd       kindnet-g2xk7
	10a2028447459       55bb025d2cfa5                                                                                         16 minutes ago      Running             kube-proxy                0                   6bbb2ed0b308f       kube-proxy-crsrr
	09c41cba0052b       3edc18e7b7672                                                                                         16 minutes ago      Running             kube-scheduler            0                   d20d569460ead       kube-scheduler-multinode-100000
	b60a8dd0efa51       3861cfcd7c04c                                                                                         16 minutes ago      Running             etcd                      0                   94cf07fa5ddcf       etcd-multinode-100000
	6d93185f30a91       1f6d574d502f3                                                                                         16 minutes ago      Running             kube-apiserver            0                   bde71375b0e4c       kube-apiserver-multinode-100000
	e6892e6b325e1       76932a3b37d7e                                                                                         16 minutes ago      Running             kube-controller-manager   0                   8cca7996d392f       kube-controller-manager-multinode-100000
	
	
	==> coredns [4a58bc5cb9c3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54441 - 10694 "HINFO IN 5152607944082316412.2643734041882751245. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.012399296s
	[INFO] 10.244.0.3:56703 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015252s
	[INFO] 10.244.0.3:42200 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.046026881s
	[INFO] 10.244.0.3:42318 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.01031955s
	[INFO] 10.244.0.3:37586 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.010459799s
	[INFO] 10.244.0.3:58156 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135202s
	[INFO] 10.244.0.3:44245 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010537472s
	[INFO] 10.244.0.3:44922 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150629s
	[INFO] 10.244.0.3:39974 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013721s
	[INFO] 10.244.0.3:33617 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010347469s
	[INFO] 10.244.0.3:38936 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154675s
	[INFO] 10.244.0.3:44726 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080983s
	[INFO] 10.244.0.3:41349 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000247413s
	[INFO] 10.244.0.3:54177 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116507s
	[INFO] 10.244.0.3:35929 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000055089s
	[INFO] 10.244.0.3:46361 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084906s
	[INFO] 10.244.0.3:49686 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085442s
	[INFO] 10.244.0.3:47333 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0000847s
	[INFO] 10.244.0.3:41915 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000057433s
	[INFO] 10.244.0.3:34860 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000071303s
	[INFO] 10.244.0.3:46952 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000111703s
	
	
	==> describe nodes <==
	Name:               multinode-100000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-100000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=multinode-100000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_06T00_38_01_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:37:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-100000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 07:54:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 07:50:14 +0000   Tue, 06 Aug 2024 07:37:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 07:50:14 +0000   Tue, 06 Aug 2024 07:37:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 07:50:14 +0000   Tue, 06 Aug 2024 07:37:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 07:50:14 +0000   Tue, 06 Aug 2024 07:38:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.13
	  Hostname:    multinode-100000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 10d8fd2a8ab04e6a90b6dfc076d9ae86
	  System UUID:                9d6d49b5-0000-0000-bb0f-6ea8b6ad2848
	  Boot ID:                    dbebf245-a006-4d46-bf5f-51c5f84b672f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dzbn7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-7db6d8ff4d-snf8h                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-multinode-100000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-g2xk7                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-multinode-100000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-multinode-100000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-crsrr                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-multinode-100000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node multinode-100000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node multinode-100000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node multinode-100000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node multinode-100000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node multinode-100000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node multinode-100000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m                node-controller  Node multinode-100000 event: Registered Node multinode-100000 in Controller
	  Normal  NodeReady                15m                kubelet          Node multinode-100000 status is now: NodeReady
	
	
	Name:               multinode-100000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-100000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=multinode-100000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_06T00_53_13_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:53:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-100000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 07:54:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 07:53:27 +0000   Tue, 06 Aug 2024 07:53:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 07:53:27 +0000   Tue, 06 Aug 2024 07:53:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 07:53:27 +0000   Tue, 06 Aug 2024 07:53:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 07:53:27 +0000   Tue, 06 Aug 2024 07:53:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.15
	  Hostname:    multinode-100000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 405631c47c9b4602b8ca253c774af06d
	  System UUID:                83a944ea-0000-0000-930f-df1a6331c821
	  Boot ID:                    bd2884b6-d728-45cc-b651-febbafe6f6e6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bfsf8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kindnet-dn72w              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m19s
	  kube-system                 kube-proxy-d9c42           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 2m12s                  kube-proxy  
	  Normal  Starting                 71s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  2m19s (x2 over 2m19s)  kubelet     Node multinode-100000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m19s (x2 over 2m19s)  kubelet     Node multinode-100000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m19s (x2 over 2m19s)  kubelet     Node multinode-100000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m19s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                116s                   kubelet     Node multinode-100000-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  74s (x2 over 74s)      kubelet     Node multinode-100000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    74s (x2 over 74s)      kubelet     Node multinode-100000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     74s (x2 over 74s)      kubelet     Node multinode-100000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  74s                    kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                59s                    kubelet     Node multinode-100000-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +2.230733] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.851509] systemd-fstab-generator[493]: Ignoring "noauto" option for root device
	[  +0.100234] systemd-fstab-generator[504]: Ignoring "noauto" option for root device
	[  +1.793153] systemd-fstab-generator[763]: Ignoring "noauto" option for root device
	[  +0.258718] systemd-fstab-generator[802]: Ignoring "noauto" option for root device
	[  +0.053606] kauditd_printk_skb: 95 callbacks suppressed
	[  +0.051277] systemd-fstab-generator[814]: Ignoring "noauto" option for root device
	[  +0.111209] systemd-fstab-generator[828]: Ignoring "noauto" option for root device
	[Aug 6 07:37] systemd-fstab-generator[1073]: Ignoring "noauto" option for root device
	[  +0.053283] kauditd_printk_skb: 92 callbacks suppressed
	[  +0.042150] systemd-fstab-generator[1085]: Ignoring "noauto" option for root device
	[  +0.103517] systemd-fstab-generator[1097]: Ignoring "noauto" option for root device
	[  +0.125760] systemd-fstab-generator[1112]: Ignoring "noauto" option for root device
	[  +3.585995] systemd-fstab-generator[1212]: Ignoring "noauto" option for root device
	[  +2.213789] kauditd_printk_skb: 100 callbacks suppressed
	[  +0.337931] systemd-fstab-generator[1463]: Ignoring "noauto" option for root device
	[  +3.523944] systemd-fstab-generator[1642]: Ignoring "noauto" option for root device
	[  +1.294549] kauditd_printk_skb: 100 callbacks suppressed
	[  +3.741886] systemd-fstab-generator[2044]: Ignoring "noauto" option for root device
	[Aug 6 07:38] systemd-fstab-generator[2255]: Ignoring "noauto" option for root device
	[  +0.124943] kauditd_printk_skb: 32 callbacks suppressed
	[ +16.004460] kauditd_printk_skb: 60 callbacks suppressed
	[Aug 6 07:39] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [b60a8dd0efa5] <==
	{"level":"info","ts":"2024-08-06T07:37:57.149401Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-06T07:37:57.149446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-06T07:37:57.149465Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgPreVoteResp from e0290fa3161c5471 at term 1"}
	{"level":"info","ts":"2024-08-06T07:37:57.149631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became candidate at term 2"}
	{"level":"info","ts":"2024-08-06T07:37:57.14964Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgVoteResp from e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-08-06T07:37:57.149646Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became leader at term 2"}
	{"level":"info","ts":"2024-08-06T07:37:57.149652Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e0290fa3161c5471 elected leader e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-08-06T07:37:57.152418Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:37:57.153493Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e0290fa3161c5471","local-member-attributes":"{Name:multinode-100000 ClientURLs:[https://192.169.0.13:2379]}","request-path":"/0/members/e0290fa3161c5471/attributes","cluster-id":"87b46e718846f146","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-06T07:37:57.153528Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T07:37:57.154583Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T07:37:57.156332Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-06T07:37:57.162987Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.13:2379"}
	{"level":"info","ts":"2024-08-06T07:37:57.167336Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-06T07:37:57.167373Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-06T07:37:57.16953Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:37:57.169589Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:37:57.169719Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:47:57.219223Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":686}
	{"level":"info","ts":"2024-08-06T07:47:57.221754Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":686,"took":"2.185771ms","hash":4164319908,"current-db-size-bytes":1994752,"current-db-size":"2.0 MB","current-db-size-in-use-bytes":1994752,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-08-06T07:47:57.221798Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4164319908,"revision":686,"compact-revision":-1}
	{"level":"info","ts":"2024-08-06T07:52:10.269202Z","caller":"traceutil/trace.go:171","msg":"trace[808197773] transaction","detail":"{read_only:false; response_revision:1165; number_of_response:1; }","duration":"104.082235ms","start":"2024-08-06T07:52:10.165072Z","end":"2024-08-06T07:52:10.269154Z","steps":["trace[808197773] 'process raft request'  (duration: 103.999362ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-06T07:52:57.222789Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":926}
	{"level":"info","ts":"2024-08-06T07:52:57.224031Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":926,"took":"926.569µs","hash":3882059122,"current-db-size-bytes":1994752,"current-db-size":"2.0 MB","current-db-size-in-use-bytes":1617920,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-06T07:52:57.224093Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3882059122,"revision":926,"compact-revision":686}
	
	
	==> kernel <==
	 07:54:26 up 18 min,  0 users,  load average: 0.14, 0.14, 0.08
	Linux multinode-100000 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ca21c7b20c75] <==
	I0806 07:53:19.609584       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.169.0.15 Flags: [] Table: 0} 
	I0806 07:53:29.608464       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:53:29.608536       1 main.go:299] handling current node
	I0806 07:53:29.608554       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0806 07:53:29.608564       1 main.go:322] Node multinode-100000-m03 has CIDR [10.244.2.0/24] 
	I0806 07:53:39.613717       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:53:39.613773       1 main.go:299] handling current node
	I0806 07:53:39.613786       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0806 07:53:39.613815       1 main.go:322] Node multinode-100000-m03 has CIDR [10.244.2.0/24] 
	I0806 07:53:49.608376       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:53:49.608547       1 main.go:299] handling current node
	I0806 07:53:49.608588       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0806 07:53:49.608686       1 main.go:322] Node multinode-100000-m03 has CIDR [10.244.2.0/24] 
	I0806 07:53:59.615606       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0806 07:53:59.615675       1 main.go:322] Node multinode-100000-m03 has CIDR [10.244.2.0/24] 
	I0806 07:53:59.615977       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:53:59.616007       1 main.go:299] handling current node
	I0806 07:54:09.616410       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:54:09.616683       1 main.go:299] handling current node
	I0806 07:54:09.616787       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0806 07:54:09.616908       1 main.go:322] Node multinode-100000-m03 has CIDR [10.244.2.0/24] 
	I0806 07:54:19.608266       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:54:19.608620       1 main.go:299] handling current node
	I0806 07:54:19.608938       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0806 07:54:19.609314       1 main.go:322] Node multinode-100000-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [6d93185f30a9] <==
	E0806 07:37:58.467821       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E0806 07:37:58.475966       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0806 07:37:58.532827       1 controller.go:615] quota admission added evaluator for: namespaces
	E0806 07:37:58.541093       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0806 07:37:58.672921       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0806 07:37:59.326856       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0806 07:37:59.329555       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0806 07:37:59.329585       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0806 07:37:59.607795       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0806 07:37:59.629707       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0806 07:37:59.743716       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0806 07:37:59.749420       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.13]
	I0806 07:37:59.751068       1 controller.go:615] quota admission added evaluator for: endpoints
	I0806 07:37:59.755409       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0806 07:38:00.364128       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0806 07:38:00.587524       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0806 07:38:00.593919       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0806 07:38:00.599813       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0806 07:38:14.702592       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0806 07:38:14.795881       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0806 07:51:40.593542       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52513: use of closed network connection
	E0806 07:51:40.913864       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52518: use of closed network connection
	E0806 07:51:41.219815       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52523: use of closed network connection
	E0806 07:51:44.319914       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52554: use of closed network connection
	E0806 07:51:44.505332       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52556: use of closed network connection
	
	
	==> kube-controller-manager [e6892e6b325e] <==
	I0806 07:39:55.173384       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.984127ms"
	I0806 07:39:55.173460       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.692µs"
	I0806 07:52:07.325935       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-100000-m03\" does not exist"
	I0806 07:52:07.342865       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-100000-m03" podCIDRs=["10.244.1.0/24"]
	I0806 07:52:09.851060       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-100000-m03"
	I0806 07:52:30.373055       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-100000-m03"
	I0806 07:52:30.382873       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.276µs"
	I0806 07:52:30.391038       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.602µs"
	I0806 07:52:32.408559       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.578386ms"
	I0806 07:52:32.408616       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.014µs"
	I0806 07:53:09.171154       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.139086ms"
	I0806 07:53:09.175196       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.978136ms"
	I0806 07:53:09.175804       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.257µs"
	I0806 07:53:13.398407       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-100000-m03\" does not exist"
	I0806 07:53:13.404870       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-100000-m03" podCIDRs=["10.244.2.0/24"]
	I0806 07:53:15.293136       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.929µs"
	I0806 07:53:28.554492       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-100000-m03"
	I0806 07:53:28.566261       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.516µs"
	I0806 07:53:38.331842       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.02µs"
	I0806 07:53:38.334824       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.081µs"
	I0806 07:53:38.341838       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.995µs"
	I0806 07:53:38.477263       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.291µs"
	I0806 07:53:38.479196       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.295µs"
	I0806 07:53:39.495459       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.323598ms"
	I0806 07:53:39.495743       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.517µs"
	
	
	==> kube-proxy [10a202844745] <==
	I0806 07:38:15.590518       1 server_linux.go:69] "Using iptables proxy"
	I0806 07:38:15.601869       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.13"]
	I0806 07:38:15.662400       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0806 07:38:15.662440       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0806 07:38:15.662490       1 server_linux.go:165] "Using iptables Proxier"
	I0806 07:38:15.664791       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0806 07:38:15.664918       1 server.go:872] "Version info" version="v1.30.3"
	I0806 07:38:15.664946       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 07:38:15.665753       1 config.go:192] "Starting service config controller"
	I0806 07:38:15.665783       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0806 07:38:15.665799       1 config.go:101] "Starting endpoint slice config controller"
	I0806 07:38:15.665822       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0806 07:38:15.667388       1 config.go:319] "Starting node config controller"
	I0806 07:38:15.667416       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0806 07:38:15.765917       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0806 07:38:15.765965       1 shared_informer.go:320] Caches are synced for service config
	I0806 07:38:15.767534       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [09c41cba0052] <==
	W0806 07:37:58.445840       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0806 07:37:58.445932       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0806 07:37:58.446107       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0806 07:37:58.446242       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0806 07:37:58.446116       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0806 07:37:58.446419       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0806 07:37:58.445401       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0806 07:37:58.446582       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0806 07:37:58.446196       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0806 07:37:58.446734       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0806 07:37:59.253603       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0806 07:37:59.253776       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0806 07:37:59.282330       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0806 07:37:59.282504       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0806 07:37:59.305407       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0806 07:37:59.305621       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0806 07:37:59.351009       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0806 07:37:59.351049       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0806 07:37:59.487287       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0806 07:37:59.487395       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0806 07:37:59.506883       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0806 07:37:59.506925       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0806 07:37:59.509357       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0806 07:37:59.509392       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0806 07:38:01.840667       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 06 07:50:00 multinode-100000 kubelet[2051]: E0806 07:50:00.481450    2051 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:50:00 multinode-100000 kubelet[2051]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:50:00 multinode-100000 kubelet[2051]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:50:00 multinode-100000 kubelet[2051]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:50:00 multinode-100000 kubelet[2051]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:51:00 multinode-100000 kubelet[2051]: E0806 07:51:00.483720    2051 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:51:00 multinode-100000 kubelet[2051]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:51:00 multinode-100000 kubelet[2051]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:51:00 multinode-100000 kubelet[2051]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:51:00 multinode-100000 kubelet[2051]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:52:00 multinode-100000 kubelet[2051]: E0806 07:52:00.481620    2051 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:52:00 multinode-100000 kubelet[2051]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:52:00 multinode-100000 kubelet[2051]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:52:00 multinode-100000 kubelet[2051]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:52:00 multinode-100000 kubelet[2051]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:53:00 multinode-100000 kubelet[2051]: E0806 07:53:00.486109    2051 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:53:00 multinode-100000 kubelet[2051]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:53:00 multinode-100000 kubelet[2051]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:53:00 multinode-100000 kubelet[2051]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:53:00 multinode-100000 kubelet[2051]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:54:00 multinode-100000 kubelet[2051]: E0806 07:54:00.482934    2051 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:54:00 multinode-100000 kubelet[2051]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:54:00 multinode-100000 kubelet[2051]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:54:00 multinode-100000 kubelet[2051]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:54:00 multinode-100000 kubelet[2051]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-100000 -n multinode-100000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-100000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StartAfterStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StartAfterStop (98.71s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (292.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-100000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-100000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-100000: (24.844244825s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-100000 --wait=true -v=8 --alsologtostderr
E0806 00:55:44.466431    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
E0806 00:57:41.415565    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
E0806 00:58:22.354252    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-100000 --wait=true -v=8 --alsologtostderr: exit status 90 (4m23.353395353s)

                                                
                                                
-- stdout --
	* [multinode-100000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "multinode-100000" primary control-plane node in "multinode-100000" cluster
	* Restarting existing hyperkit VM for "multinode-100000" ...
	* Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	* Starting "multinode-100000-m02" worker node in "multinode-100000" cluster
	* Restarting existing hyperkit VM for "multinode-100000-m02" ...
	* Found network options:
	  - NO_PROXY=192.169.0.13
	* Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	  - env NO_PROXY=192.169.0.13
	* Verifying Kubernetes components...
	
	* Starting "multinode-100000-m03" worker node in "multinode-100000" cluster
	* Restarting existing hyperkit VM for "multinode-100000-m03" ...
	* Found network options:
	  - NO_PROXY=192.169.0.13,192.169.0.14
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0806 00:54:52.775291    5434 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:54:52.775561    5434 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:54:52.775566    5434 out.go:304] Setting ErrFile to fd 2...
	I0806 00:54:52.775570    5434 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:54:52.775723    5434 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	I0806 00:54:52.777331    5434 out.go:298] Setting JSON to false
	I0806 00:54:52.799866    5434 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3254,"bootTime":1722927638,"procs":431,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0806 00:54:52.799957    5434 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 00:54:52.822010    5434 out.go:177] * [multinode-100000] minikube v1.33.1 on Darwin 14.5
	I0806 00:54:52.864712    5434 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 00:54:52.864770    5434 notify.go:220] Checking for updates...
	I0806 00:54:52.907409    5434 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:54:52.928567    5434 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0806 00:54:52.949610    5434 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:54:52.970563    5434 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 00:54:52.991585    5434 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 00:54:53.013277    5434 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:54:53.013490    5434 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:54:53.014138    5434 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:54:53.014217    5434 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:54:53.023954    5434 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53066
	I0806 00:54:53.024306    5434 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:54:53.024759    5434 main.go:141] libmachine: Using API Version  1
	I0806 00:54:53.024773    5434 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:54:53.025048    5434 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:54:53.025203    5434 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:54:53.053365    5434 out.go:177] * Using the hyperkit driver based on existing profile
	I0806 00:54:53.074587    5434 start.go:297] selected driver: hyperkit
	I0806 00:54:53.074644    5434 start.go:901] validating driver "hyperkit" against &{Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.15 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:54:53.074889    5434 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 00:54:53.075080    5434 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:54:53.075282    5434 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19370-944/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0806 00:54:53.084939    5434 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0806 00:54:53.088779    5434 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:54:53.088814    5434 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0806 00:54:53.091507    5434 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 00:54:53.091564    5434 cni.go:84] Creating CNI manager for ""
	I0806 00:54:53.091573    5434 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0806 00:54:53.091658    5434 start.go:340] cluster config:
	{Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.15 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:54:53.091766    5434 iso.go:125] acquiring lock: {Name:mka9ceffb203a07dd8928fb34e5b66df1a4204ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:54:53.133253    5434 out.go:177] * Starting "multinode-100000" primary control-plane node in "multinode-100000" cluster
	I0806 00:54:53.154509    5434 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:54:53.154586    5434 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0806 00:54:53.154619    5434 cache.go:56] Caching tarball of preloaded images
	I0806 00:54:53.154820    5434 preload.go:172] Found /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0806 00:54:53.154837    5434 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 00:54:53.155029    5434 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:54:53.155979    5434 start.go:360] acquireMachinesLock for multinode-100000: {Name:mk23fe223591838ba69a1052c4474834b6e8897d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:54:53.156123    5434 start.go:364] duration metric: took 115.218µs to acquireMachinesLock for "multinode-100000"
	I0806 00:54:53.156179    5434 start.go:96] Skipping create...Using existing machine configuration
	I0806 00:54:53.156190    5434 fix.go:54] fixHost starting: 
	I0806 00:54:53.156488    5434 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:54:53.156518    5434 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:54:53.165726    5434 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53068
	I0806 00:54:53.166104    5434 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:54:53.166447    5434 main.go:141] libmachine: Using API Version  1
	I0806 00:54:53.166459    5434 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:54:53.166680    5434 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:54:53.166799    5434 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:54:53.166912    5434 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:54:53.167000    5434 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:54:53.167075    5434 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:54:53.167993    5434 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid 4303 missing from process table
	I0806 00:54:53.168044    5434 fix.go:112] recreateIfNeeded on multinode-100000: state=Stopped err=<nil>
	I0806 00:54:53.168068    5434 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	W0806 00:54:53.168161    5434 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 00:54:53.210553    5434 out.go:177] * Restarting existing hyperkit VM for "multinode-100000" ...
	I0806 00:54:53.233510    5434 main.go:141] libmachine: (multinode-100000) Calling .Start
	I0806 00:54:53.233779    5434 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:54:53.233830    5434 main.go:141] libmachine: (multinode-100000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/hyperkit.pid
	I0806 00:54:53.235587    5434 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid 4303 missing from process table
	I0806 00:54:53.235601    5434 main.go:141] libmachine: (multinode-100000) DBG | pid 4303 is in state "Stopped"
	I0806 00:54:53.235624    5434 main.go:141] libmachine: (multinode-100000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/hyperkit.pid...
	I0806 00:54:53.235833    5434 main.go:141] libmachine: (multinode-100000) DBG | Using UUID 9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848
	I0806 00:54:53.349771    5434 main.go:141] libmachine: (multinode-100000) DBG | Generated MAC 1a:eb:5b:3:28:91
	I0806 00:54:53.349804    5434 main.go:141] libmachine: (multinode-100000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000
	I0806 00:54:53.349923    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b87e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 00:54:53.349949    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b87e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 00:54:53.350000    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/multinode-100000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"}
	I0806 00:54:53.350046    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/multinode-100000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/console-ring -f kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"
	I0806 00:54:53.350064    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0806 00:54:53.351421    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 DEBUG: hyperkit: Pid is 5446
	I0806 00:54:53.351799    5434 main.go:141] libmachine: (multinode-100000) DBG | Attempt 0
	I0806 00:54:53.351809    5434 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:54:53.351891    5434 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 5446
	I0806 00:54:53.353820    5434 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:54:53.353926    5434 main.go:141] libmachine: (multinode-100000) DBG | Found 14 entries in /var/db/dhcpd_leases!
	I0806 00:54:53.353945    5434 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b327da}
	I0806 00:54:53.353958    5434 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b32483}
	I0806 00:54:53.353969    5434 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:54:53.353976    5434 main.go:141] libmachine: (multinode-100000) DBG | Found match: 1a:eb:5b:3:28:91
	I0806 00:54:53.353983    5434 main.go:141] libmachine: (multinode-100000) DBG | IP: 192.169.0.13
	I0806 00:54:53.354064    5434 main.go:141] libmachine: (multinode-100000) Calling .GetConfigRaw
	I0806 00:54:53.354774    5434 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:54:53.355023    5434 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:54:53.355524    5434 machine.go:94] provisionDockerMachine start ...
	I0806 00:54:53.355536    5434 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:54:53.355691    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:54:53.355814    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:54:53.355925    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:54:53.356036    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:54:53.356154    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:54:53.356323    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:54:53.356521    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:54:53.356533    5434 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 00:54:53.359935    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0806 00:54:53.411612    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0806 00:54:53.412320    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:54:53.412339    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:54:53.412346    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:54:53.412355    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:54:53.793354    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0806 00:54:53.793370    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0806 00:54:53.907960    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:54:53.907981    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:54:53.907996    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:54:53.908005    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:54:53.908869    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0806 00:54:53.908882    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0806 00:54:59.470791    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:59 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0806 00:54:59.470906    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:59 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0806 00:54:59.470916    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:59 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0806 00:54:59.495324    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:59 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0806 00:55:04.433190    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0806 00:55:04.433204    5434 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:55:04.433414    5434 buildroot.go:166] provisioning hostname "multinode-100000"
	I0806 00:55:04.433426    5434 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:55:04.433525    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:55:04.433619    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:55:04.433715    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:55:04.433824    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:55:04.433936    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:55:04.434099    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:55:04.434280    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:55:04.434302    5434 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-100000 && echo "multinode-100000" | sudo tee /etc/hostname
	I0806 00:55:04.510650    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-100000
	
	I0806 00:55:04.510671    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:55:04.510814    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:55:04.510917    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:55:04.511009    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:55:04.511103    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:55:04.511218    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:55:04.511376    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:55:04.511388    5434 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-100000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-100000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-100000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 00:55:04.581815    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:55:04.581856    5434 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19370-944/.minikube CaCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19370-944/.minikube}
	I0806 00:55:04.581885    5434 buildroot.go:174] setting up certificates
	I0806 00:55:04.581892    5434 provision.go:84] configureAuth start
	I0806 00:55:04.581900    5434 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:55:04.582032    5434 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:55:04.582112    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:55:04.582198    5434 provision.go:143] copyHostCerts
	I0806 00:55:04.582227    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:55:04.582303    5434 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem, removing ...
	I0806 00:55:04.582311    5434 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:55:04.582460    5434 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem (1123 bytes)
	I0806 00:55:04.582669    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:55:04.582710    5434 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem, removing ...
	I0806 00:55:04.582715    5434 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:55:04.582803    5434 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem (1679 bytes)
	I0806 00:55:04.582953    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:55:04.582994    5434 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem, removing ...
	I0806 00:55:04.582999    5434 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:55:04.583086    5434 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem (1078 bytes)
	I0806 00:55:04.583248    5434 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem org=jenkins.multinode-100000 san=[127.0.0.1 192.169.0.13 localhost minikube multinode-100000]
	I0806 00:55:04.712424    5434 provision.go:177] copyRemoteCerts
	I0806 00:55:04.712483    5434 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 00:55:04.712499    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:55:04.712641    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:55:04.712739    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:55:04.712831    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:55:04.712916    5434 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:55:04.750794    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0806 00:55:04.750868    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 00:55:04.771056    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0806 00:55:04.771110    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0806 00:55:04.790705    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0806 00:55:04.790769    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 00:55:04.810426    5434 provision.go:87] duration metric: took 228.51549ms to configureAuth
	I0806 00:55:04.810439    5434 buildroot.go:189] setting minikube options for container-runtime
	I0806 00:55:04.810605    5434 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:55:04.810620    5434 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:55:04.810754    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:55:04.810848    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:55:04.810933    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:55:04.811014    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:55:04.811089    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:55:04.811201    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:55:04.811331    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:55:04.811339    5434 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0806 00:55:04.876926    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0806 00:55:04.876938    5434 buildroot.go:70] root file system type: tmpfs
	I0806 00:55:04.877025    5434 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0806 00:55:04.877040    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:55:04.877182    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:55:04.877280    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:55:04.877378    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:55:04.877466    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:55:04.877597    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:55:04.877740    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:55:04.877784    5434 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0806 00:55:04.953206    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
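	[editor's note] The unit file echoed above first writes an empty `ExecStart=` and then the real command. That ordering matters: systemd treats an empty assignment as a reset, which is what lets this file override the base dockerd command instead of appending a second one (which the log's own comment notes would be rejected for `Type=notify` services). A minimal sketch of that reset semantic, using a hypothetical helper that is not part of minikube:

```python
# Sketch (hypothetical helper, not minikube code): systemd treats an empty
# "ExecStart=" as a reset of the accumulated command list, so a later unit
# or drop-in can replace the base command instead of appending to it.
def effective_execstarts(unit_text: str) -> list[str]:
    cmds = []
    for line in unit_text.splitlines():
        line = line.strip()
        if line.startswith("ExecStart="):
            value = line[len("ExecStart="):].strip()
            if value == "":
                cmds.clear()          # empty assignment resets the list
            else:
                cmds.append(value)
    return cmds

base = "ExecStart=/usr/bin/dockerd --base-flags\n"
override = "ExecStart=\nExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376\n"

# Without the empty reset line, two ExecStart commands would accumulate,
# which systemd rejects for anything other than Type=oneshot.
print(effective_execstarts(base + override))
```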
	I0806 00:55:04.953225    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:55:04.953377    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:55:04.953483    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:55:04.953589    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:55:04.953690    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:55:04.953819    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:55:04.953957    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:55:04.953970    5434 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0806 00:55:06.623296    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0806 00:55:06.623311    5434 machine.go:97] duration metric: took 13.267517182s to provisionDockerMachine
	I0806 00:55:06.623323    5434 start.go:293] postStartSetup for "multinode-100000" (driver="hyperkit")
	I0806 00:55:06.623330    5434 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 00:55:06.623347    5434 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:55:06.623540    5434 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 00:55:06.623553    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:55:06.623643    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:55:06.623737    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:55:06.623841    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:55:06.623952    5434 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:55:06.668104    5434 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 00:55:06.671469    5434 command_runner.go:130] > NAME=Buildroot
	I0806 00:55:06.671477    5434 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0806 00:55:06.671481    5434 command_runner.go:130] > ID=buildroot
	I0806 00:55:06.671485    5434 command_runner.go:130] > VERSION_ID=2023.02.9
	I0806 00:55:06.671488    5434 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0806 00:55:06.671619    5434 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 00:55:06.671630    5434 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/addons for local assets ...
	I0806 00:55:06.671730    5434 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/files for local assets ...
	I0806 00:55:06.671922    5434 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> 14372.pem in /etc/ssl/certs
	I0806 00:55:06.671928    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> /etc/ssl/certs/14372.pem
	I0806 00:55:06.672134    5434 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 00:55:06.682041    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem --> /etc/ssl/certs/14372.pem (1708 bytes)
	I0806 00:55:06.712670    5434 start.go:296] duration metric: took 89.337079ms for postStartSetup
	I0806 00:55:06.712696    5434 fix.go:56] duration metric: took 13.556242885s for fixHost
	I0806 00:55:06.712709    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:55:06.712842    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:55:06.712939    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:55:06.713031    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:55:06.713121    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:55:06.713260    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:55:06.713404    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:55:06.713411    5434 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0806 00:55:06.779050    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722930906.844084403
	
	I0806 00:55:06.779062    5434 fix.go:216] guest clock: 1722930906.844084403
	I0806 00:55:06.779068    5434 fix.go:229] Guest: 2024-08-06 00:55:06.844084403 -0700 PDT Remote: 2024-08-06 00:55:06.712699 -0700 PDT m=+13.974282859 (delta=131.385403ms)
	I0806 00:55:06.779083    5434 fix.go:200] guest clock delta is within tolerance: 131.385403ms
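	[editor's note] The fix.go lines above run `date +%s.%N` inside the guest and compare the result against the host clock; the 131.385403ms delta is accepted because it falls within tolerance. A sketch of that comparison, using the values from this log (the 1-second tolerance is an assumption for illustration, not minikube's configured value):

```python
# Sketch (assumed logic mirroring the fix.go log lines above): compare the
# guest's `date +%s.%N` output against the host timestamp taken at the
# same moment, and accept the skew if it is below some tolerance.
guest = 1722930906.844084403   # seconds, from `date +%s.%N` in the VM
host = 1722930906.712699       # host-side timestamp for the same probe
delta = abs(guest - host)      # ~0.1314s, i.e. the 131.385403ms in the log
tolerance = 1.0                # hypothetical tolerance for illustration

within = delta < tolerance
print(f"delta={delta*1000:.3f}ms within_tolerance={within}")
```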
	I0806 00:55:06.779088    5434 start.go:83] releasing machines lock for "multinode-100000", held for 13.622685085s
	I0806 00:55:06.779108    5434 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:55:06.779243    5434 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:55:06.779354    5434 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:55:06.779683    5434 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:55:06.779782    5434 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:55:06.779886    5434 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 00:55:06.779913    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:55:06.779957    5434 ssh_runner.go:195] Run: cat /version.json
	I0806 00:55:06.779977    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:55:06.780040    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:55:06.780076    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:55:06.780159    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:55:06.780196    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:55:06.780314    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:55:06.780331    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:55:06.780402    5434 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:55:06.780430    5434 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:55:06.862442    5434 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0806 00:55:06.862500    5434 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0806 00:55:06.862677    5434 ssh_runner.go:195] Run: systemctl --version
	I0806 00:55:06.867604    5434 command_runner.go:130] > systemd 252 (252)
	I0806 00:55:06.867628    5434 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0806 00:55:06.867839    5434 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0806 00:55:06.872017    5434 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0806 00:55:06.872077    5434 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 00:55:06.872121    5434 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 00:55:06.885766    5434 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0806 00:55:06.885848    5434 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 00:55:06.885861    5434 start.go:495] detecting cgroup driver to use...
	I0806 00:55:06.885952    5434 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:55:06.900629    5434 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0806 00:55:06.900887    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0806 00:55:06.909937    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0806 00:55:06.918880    5434 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0806 00:55:06.918922    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0806 00:55:06.927993    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:55:06.936831    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0806 00:55:06.945909    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:55:06.954813    5434 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 00:55:06.963998    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0806 00:55:06.972888    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0806 00:55:06.981863    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
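	[editor's note] The series of `sed -i -r` runs above rewrites `/etc/containerd/config.toml` in place, preserving each line's indentation via the `\1` backreference. A `re.sub` sketch of the `SystemdCgroup` flip (the TOML fragment is illustrative; the real config file is much larger):

```python
import re

# Illustrative fragment; the real /etc/containerd/config.toml has many more keys.
toml = """
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
"""

# Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
# The captured leading spaces keep the TOML indentation intact.
out = re.sub(r"^( *)SystemdCgroup = .*$", r"\1SystemdCgroup = false",
             toml, flags=re.MULTILINE)
print(out)
```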
	I0806 00:55:06.990782    5434 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 00:55:06.998891    5434 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0806 00:55:06.999023    5434 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 00:55:07.008442    5434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:55:07.111172    5434 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0806 00:55:07.129602    5434 start.go:495] detecting cgroup driver to use...
	I0806 00:55:07.129681    5434 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0806 00:55:07.146741    5434 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0806 00:55:07.147296    5434 command_runner.go:130] > [Unit]
	I0806 00:55:07.147306    5434 command_runner.go:130] > Description=Docker Application Container Engine
	I0806 00:55:07.147311    5434 command_runner.go:130] > Documentation=https://docs.docker.com
	I0806 00:55:07.147316    5434 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0806 00:55:07.147321    5434 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0806 00:55:07.147341    5434 command_runner.go:130] > StartLimitBurst=3
	I0806 00:55:07.147347    5434 command_runner.go:130] > StartLimitIntervalSec=60
	I0806 00:55:07.147351    5434 command_runner.go:130] > [Service]
	I0806 00:55:07.147354    5434 command_runner.go:130] > Type=notify
	I0806 00:55:07.147358    5434 command_runner.go:130] > Restart=on-failure
	I0806 00:55:07.147363    5434 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0806 00:55:07.147370    5434 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0806 00:55:07.147376    5434 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0806 00:55:07.147382    5434 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0806 00:55:07.147388    5434 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0806 00:55:07.147392    5434 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0806 00:55:07.147398    5434 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0806 00:55:07.147414    5434 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0806 00:55:07.147421    5434 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0806 00:55:07.147428    5434 command_runner.go:130] > ExecStart=
	I0806 00:55:07.147440    5434 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0806 00:55:07.147445    5434 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0806 00:55:07.147452    5434 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0806 00:55:07.147458    5434 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0806 00:55:07.147462    5434 command_runner.go:130] > LimitNOFILE=infinity
	I0806 00:55:07.147466    5434 command_runner.go:130] > LimitNPROC=infinity
	I0806 00:55:07.147478    5434 command_runner.go:130] > LimitCORE=infinity
	I0806 00:55:07.147483    5434 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0806 00:55:07.147488    5434 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0806 00:55:07.147493    5434 command_runner.go:130] > TasksMax=infinity
	I0806 00:55:07.147498    5434 command_runner.go:130] > TimeoutStartSec=0
	I0806 00:55:07.147510    5434 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0806 00:55:07.147518    5434 command_runner.go:130] > Delegate=yes
	I0806 00:55:07.147526    5434 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0806 00:55:07.147536    5434 command_runner.go:130] > KillMode=process
	I0806 00:55:07.147540    5434 command_runner.go:130] > [Install]
	I0806 00:55:07.147551    5434 command_runner.go:130] > WantedBy=multi-user.target
	I0806 00:55:07.147629    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:55:07.159343    5434 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 00:55:07.174076    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:55:07.185284    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:55:07.196345    5434 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0806 00:55:07.220943    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:55:07.232200    5434 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:55:07.246532    5434 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0806 00:55:07.246763    5434 ssh_runner.go:195] Run: which cri-dockerd
	I0806 00:55:07.249395    5434 command_runner.go:130] > /usr/bin/cri-dockerd
	I0806 00:55:07.249601    5434 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0806 00:55:07.256709    5434 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0806 00:55:07.270264    5434 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0806 00:55:07.373249    5434 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0806 00:55:07.470581    5434 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0806 00:55:07.470656    5434 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0806 00:55:07.484033    5434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:55:07.585356    5434 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:55:09.915028    5434 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.329607446s)
	I0806 00:55:09.915085    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0806 00:55:09.926762    5434 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0806 00:55:09.941366    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0806 00:55:09.953827    5434 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0806 00:55:10.050234    5434 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0806 00:55:10.161226    5434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:55:10.271189    5434 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0806 00:55:10.284569    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0806 00:55:10.295807    5434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:55:10.407189    5434 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0806 00:55:10.463062    5434 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0806 00:55:10.463140    5434 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0806 00:55:10.467071    5434 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0806 00:55:10.467082    5434 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0806 00:55:10.467086    5434 command_runner.go:130] > Device: 0,22	Inode: 753         Links: 1
	I0806 00:55:10.467091    5434 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0806 00:55:10.467096    5434 command_runner.go:130] > Access: 2024-08-06 07:55:10.485303147 +0000
	I0806 00:55:10.467101    5434 command_runner.go:130] > Modify: 2024-08-06 07:55:10.485303147 +0000
	I0806 00:55:10.467106    5434 command_runner.go:130] > Change: 2024-08-06 07:55:10.486303006 +0000
	I0806 00:55:10.467111    5434 command_runner.go:130] >  Birth: -
	I0806 00:55:10.467303    5434 start.go:563] Will wait 60s for crictl version
	I0806 00:55:10.467344    5434 ssh_runner.go:195] Run: which crictl
	I0806 00:55:10.470189    5434 command_runner.go:130] > /usr/bin/crictl
	I0806 00:55:10.470513    5434 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 00:55:10.499752    5434 command_runner.go:130] > Version:  0.1.0
	I0806 00:55:10.499767    5434 command_runner.go:130] > RuntimeName:  docker
	I0806 00:55:10.499770    5434 command_runner.go:130] > RuntimeVersion:  27.1.1
	I0806 00:55:10.499774    5434 command_runner.go:130] > RuntimeApiVersion:  v1
	I0806 00:55:10.500795    5434 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0806 00:55:10.500863    5434 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0806 00:55:10.517201    5434 command_runner.go:130] > 27.1.1
	I0806 00:55:10.518128    5434 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0806 00:55:10.535554    5434 command_runner.go:130] > 27.1.1
	I0806 00:55:10.579645    5434 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0806 00:55:10.579691    5434 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:55:10.580056    5434 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0806 00:55:10.584485    5434 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
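	[editor's note] The `{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp ...` command above is an upsert on `/etc/hosts`: drop any existing line for `host.minikube.internal`, then append the fresh mapping. A sketch of the same transformation (the helper name is hypothetical, and matching on a trailing tab-plus-name approximates the anchored grep pattern):

```python
def upsert_host(hosts_text: str, ip: str, name: str) -> str:
    # Sketch of: { grep -v $'\t<name>$' /etc/hosts; echo "<ip>\t<name>"; } > tmp
    # Keep every line that does not already map <name>, then append the new entry.
    kept = [line for line in hosts_text.splitlines()
            if not line.endswith("\t" + name)]
    kept.append(f"{ip}\t{name}")
    return "\n".join(kept) + "\n"
```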
	I0806 00:55:10.594933    5434 kubeadm.go:883] updating cluster {Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.15 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-
dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 00:55:10.595035    5434 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:55:10.595090    5434 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0806 00:55:10.607660    5434 command_runner.go:130] > kindest/kindnetd:v20240730-75a5af0c
	I0806 00:55:10.607674    5434 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0806 00:55:10.607678    5434 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0806 00:55:10.607682    5434 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0806 00:55:10.607686    5434 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0806 00:55:10.607690    5434 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0806 00:55:10.607694    5434 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0806 00:55:10.607711    5434 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0806 00:55:10.607716    5434 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:55:10.607720    5434 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0806 00:55:10.609002    5434 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240730-75a5af0c
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0806 00:55:10.609015    5434 docker.go:615] Images already preloaded, skipping extraction
	I0806 00:55:10.609085    5434 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0806 00:55:10.620324    5434 command_runner.go:130] > kindest/kindnetd:v20240730-75a5af0c
	I0806 00:55:10.620345    5434 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0806 00:55:10.620349    5434 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0806 00:55:10.620354    5434 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0806 00:55:10.620358    5434 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0806 00:55:10.620362    5434 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0806 00:55:10.620366    5434 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0806 00:55:10.620370    5434 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0806 00:55:10.620375    5434 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:55:10.620379    5434 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0806 00:55:10.620837    5434 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240730-75a5af0c
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0806 00:55:10.620857    5434 cache_images.go:84] Images are preloaded, skipping loading
	I0806 00:55:10.620870    5434 kubeadm.go:934] updating node { 192.169.0.13 8443 v1.30.3 docker true true} ...
	I0806 00:55:10.620947    5434 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-100000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 00:55:10.621028    5434 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0806 00:55:10.656828    5434 command_runner.go:130] > cgroupfs
	I0806 00:55:10.657651    5434 cni.go:84] Creating CNI manager for ""
	I0806 00:55:10.657667    5434 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0806 00:55:10.657680    5434 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 00:55:10.657699    5434 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.13 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-100000 NodeName:multinode-100000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 00:55:10.657785    5434 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-100000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 00:55:10.657836    5434 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 00:55:10.666311    5434 command_runner.go:130] > kubeadm
	I0806 00:55:10.666320    5434 command_runner.go:130] > kubectl
	I0806 00:55:10.666324    5434 command_runner.go:130] > kubelet
	I0806 00:55:10.666336    5434 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 00:55:10.666376    5434 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 00:55:10.674599    5434 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0806 00:55:10.688184    5434 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 00:55:10.701466    5434 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0806 00:55:10.715212    5434 ssh_runner.go:195] Run: grep 192.169.0.13	control-plane.minikube.internal$ /etc/hosts
	I0806 00:55:10.717953    5434 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 00:55:10.727893    5434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:55:10.820115    5434 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:55:10.832903    5434 certs.go:68] Setting up /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000 for IP: 192.169.0.13
	I0806 00:55:10.832915    5434 certs.go:194] generating shared ca certs ...
	I0806 00:55:10.832929    5434 certs.go:226] acquiring lock for ca certs: {Name:mk58145664d6c2b1eff70ba1600cc91cf1a11355 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:55:10.833128    5434 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.key
	I0806 00:55:10.833206    5434 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.key
	I0806 00:55:10.833216    5434 certs.go:256] generating profile certs ...
	I0806 00:55:10.833328    5434 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key
	I0806 00:55:10.833415    5434 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key.de816dec
	I0806 00:55:10.833485    5434 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key
	I0806 00:55:10.833492    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0806 00:55:10.833513    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0806 00:55:10.833532    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0806 00:55:10.833551    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0806 00:55:10.833568    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0806 00:55:10.833598    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0806 00:55:10.833629    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0806 00:55:10.833648    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0806 00:55:10.833756    5434 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437.pem (1338 bytes)
	W0806 00:55:10.833801    5434 certs.go:480] ignoring /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437_empty.pem, impossibly tiny 0 bytes
	I0806 00:55:10.833808    5434 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 00:55:10.833839    5434 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem (1078 bytes)
	I0806 00:55:10.833872    5434 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem (1123 bytes)
	I0806 00:55:10.833906    5434 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem (1679 bytes)
	I0806 00:55:10.833974    5434 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem (1708 bytes)
	I0806 00:55:10.834010    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> /usr/share/ca-certificates/14372.pem
	I0806 00:55:10.834032    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:55:10.834049    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437.pem -> /usr/share/ca-certificates/1437.pem
	I0806 00:55:10.834498    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 00:55:10.864424    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 00:55:10.891260    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 00:55:10.914747    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0806 00:55:10.943675    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0806 00:55:10.965018    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 00:55:10.984529    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 00:55:11.003871    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 00:55:11.023031    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem --> /usr/share/ca-certificates/14372.pem (1708 bytes)
	I0806 00:55:11.042125    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 00:55:11.061390    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437.pem --> /usr/share/ca-certificates/1437.pem (1338 bytes)
	I0806 00:55:11.080969    5434 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 00:55:11.094345    5434 ssh_runner.go:195] Run: openssl version
	I0806 00:55:11.098214    5434 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0806 00:55:11.098412    5434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 00:55:11.107460    5434 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:55:11.110604    5434 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  6 07:05 /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:55:11.110750    5434 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:05 /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:55:11.110787    5434 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:55:11.114716    5434 command_runner.go:130] > b5213941
	I0806 00:55:11.114958    5434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 00:55:11.123886    5434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1437.pem && ln -fs /usr/share/ca-certificates/1437.pem /etc/ssl/certs/1437.pem"
	I0806 00:55:11.132800    5434 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1437.pem
	I0806 00:55:11.135908    5434 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  6 07:14 /usr/share/ca-certificates/1437.pem
	I0806 00:55:11.135951    5434 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:14 /usr/share/ca-certificates/1437.pem
	I0806 00:55:11.135985    5434 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1437.pem
	I0806 00:55:11.139937    5434 command_runner.go:130] > 51391683
	I0806 00:55:11.140149    5434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1437.pem /etc/ssl/certs/51391683.0"
	I0806 00:55:11.149071    5434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14372.pem && ln -fs /usr/share/ca-certificates/14372.pem /etc/ssl/certs/14372.pem"
	I0806 00:55:11.157866    5434 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14372.pem
	I0806 00:55:11.161027    5434 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  6 07:14 /usr/share/ca-certificates/14372.pem
	I0806 00:55:11.161128    5434 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:14 /usr/share/ca-certificates/14372.pem
	I0806 00:55:11.161162    5434 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14372.pem
	I0806 00:55:11.165060    5434 command_runner.go:130] > 3ec20f2e
	I0806 00:55:11.165263    5434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14372.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 00:55:11.174094    5434 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 00:55:11.177167    5434 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 00:55:11.177181    5434 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0806 00:55:11.177187    5434 command_runner.go:130] > Device: 253,1	Inode: 531528      Links: 1
	I0806 00:55:11.177192    5434 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0806 00:55:11.177197    5434 command_runner.go:130] > Access: 2024-08-06 07:37:53.344202328 +0000
	I0806 00:55:11.177201    5434 command_runner.go:130] > Modify: 2024-08-06 07:37:53.344202328 +0000
	I0806 00:55:11.177207    5434 command_runner.go:130] > Change: 2024-08-06 07:37:53.344202328 +0000
	I0806 00:55:11.177212    5434 command_runner.go:130] >  Birth: 2024-08-06 07:37:53.344202328 +0000
	I0806 00:55:11.177350    5434 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 00:55:11.181436    5434 command_runner.go:130] > Certificate will not expire
	I0806 00:55:11.181604    5434 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 00:55:11.185540    5434 command_runner.go:130] > Certificate will not expire
	I0806 00:55:11.185693    5434 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 00:55:11.189793    5434 command_runner.go:130] > Certificate will not expire
	I0806 00:55:11.189985    5434 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 00:55:11.193916    5434 command_runner.go:130] > Certificate will not expire
	I0806 00:55:11.194116    5434 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 00:55:11.198028    5434 command_runner.go:130] > Certificate will not expire
	I0806 00:55:11.198231    5434 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0806 00:55:11.202137    5434 command_runner.go:130] > Certificate will not expire
	I0806 00:55:11.202319    5434 kubeadm.go:392] StartCluster: {Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.15 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:55:11.202443    5434 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0806 00:55:11.215188    5434 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 00:55:11.223263    5434 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0806 00:55:11.223276    5434 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0806 00:55:11.223283    5434 command_runner.go:130] > /var/lib/minikube/etcd:
	I0806 00:55:11.223302    5434 command_runner.go:130] > member
	I0806 00:55:11.223406    5434 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0806 00:55:11.223415    5434 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0806 00:55:11.223453    5434 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0806 00:55:11.231409    5434 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0806 00:55:11.231732    5434 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-100000" does not appear in /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:55:11.231826    5434 kubeconfig.go:62] /Users/jenkins/minikube-integration/19370-944/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-100000" cluster setting kubeconfig missing "multinode-100000" context setting]
	I0806 00:55:11.232022    5434 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/kubeconfig: {Name:mka547673b59bc4eb06e1f2c8130de31708dba29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:55:11.232670    5434 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:55:11.232876    5434 kapi.go:59] client config for multinode-100000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key", CAFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1231e1a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 00:55:11.233199    5434 cert_rotation.go:137] Starting client certificate rotation controller
	I0806 00:55:11.233368    5434 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0806 00:55:11.241354    5434 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.13
	I0806 00:55:11.241372    5434 kubeadm.go:1160] stopping kube-system containers ...
	I0806 00:55:11.241430    5434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0806 00:55:11.255536    5434 command_runner.go:130] > 4a58bc5cb9c3
	I0806 00:55:11.255549    5434 command_runner.go:130] > 47e0c0c6895e
	I0806 00:55:11.255553    5434 command_runner.go:130] > 5fae897eca5b
	I0806 00:55:11.255555    5434 command_runner.go:130] > ea5bc31c5483
	I0806 00:55:11.255559    5434 command_runner.go:130] > ca21c7b20c75
	I0806 00:55:11.255562    5434 command_runner.go:130] > 10a202844745
	I0806 00:55:11.255566    5434 command_runner.go:130] > 6bbb2ed0b308
	I0806 00:55:11.255570    5434 command_runner.go:130] > 731b397a827b
	I0806 00:55:11.255573    5434 command_runner.go:130] > 09c41cba0052
	I0806 00:55:11.255576    5434 command_runner.go:130] > b60a8dd0efa5
	I0806 00:55:11.255580    5434 command_runner.go:130] > 6d93185f30a9
	I0806 00:55:11.255600    5434 command_runner.go:130] > e6892e6b325e
	I0806 00:55:11.255605    5434 command_runner.go:130] > d20d569460ea
	I0806 00:55:11.255608    5434 command_runner.go:130] > 8cca7996d392
	I0806 00:55:11.255611    5434 command_runner.go:130] > bde71375b0e4
	I0806 00:55:11.255614    5434 command_runner.go:130] > 94cf07fa5ddc
	I0806 00:55:11.256218    5434 docker.go:483] Stopping containers: [4a58bc5cb9c3 47e0c0c6895e 5fae897eca5b ea5bc31c5483 ca21c7b20c75 10a202844745 6bbb2ed0b308 731b397a827b 09c41cba0052 b60a8dd0efa5 6d93185f30a9 e6892e6b325e d20d569460ea 8cca7996d392 bde71375b0e4 94cf07fa5ddc]
	I0806 00:55:11.256286    5434 ssh_runner.go:195] Run: docker stop 4a58bc5cb9c3 47e0c0c6895e 5fae897eca5b ea5bc31c5483 ca21c7b20c75 10a202844745 6bbb2ed0b308 731b397a827b 09c41cba0052 b60a8dd0efa5 6d93185f30a9 e6892e6b325e d20d569460ea 8cca7996d392 bde71375b0e4 94cf07fa5ddc
	I0806 00:55:11.268129    5434 command_runner.go:130] > 4a58bc5cb9c3
	I0806 00:55:11.268511    5434 command_runner.go:130] > 47e0c0c6895e
	I0806 00:55:11.268518    5434 command_runner.go:130] > 5fae897eca5b
	I0806 00:55:11.269754    5434 command_runner.go:130] > ea5bc31c5483
	I0806 00:55:11.269760    5434 command_runner.go:130] > ca21c7b20c75
	I0806 00:55:11.269763    5434 command_runner.go:130] > 10a202844745
	I0806 00:55:11.269767    5434 command_runner.go:130] > 6bbb2ed0b308
	I0806 00:55:11.269780    5434 command_runner.go:130] > 731b397a827b
	I0806 00:55:11.269785    5434 command_runner.go:130] > 09c41cba0052
	I0806 00:55:11.269788    5434 command_runner.go:130] > b60a8dd0efa5
	I0806 00:55:11.270315    5434 command_runner.go:130] > 6d93185f30a9
	I0806 00:55:11.270323    5434 command_runner.go:130] > e6892e6b325e
	I0806 00:55:11.270532    5434 command_runner.go:130] > d20d569460ea
	I0806 00:55:11.270538    5434 command_runner.go:130] > 8cca7996d392
	I0806 00:55:11.270541    5434 command_runner.go:130] > bde71375b0e4
	I0806 00:55:11.270544    5434 command_runner.go:130] > 94cf07fa5ddc
	I0806 00:55:11.271328    5434 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0806 00:55:11.284221    5434 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 00:55:11.292278    5434 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0806 00:55:11.292297    5434 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0806 00:55:11.292304    5434 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0806 00:55:11.292324    5434 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 00:55:11.292402    5434 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 00:55:11.292413    5434 kubeadm.go:157] found existing configuration files:
	
	I0806 00:55:11.292449    5434 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 00:55:11.300196    5434 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 00:55:11.300213    5434 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 00:55:11.300249    5434 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 00:55:11.308035    5434 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 00:55:11.315574    5434 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 00:55:11.315591    5434 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 00:55:11.315627    5434 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 00:55:11.323528    5434 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 00:55:11.330930    5434 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 00:55:11.330949    5434 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 00:55:11.330983    5434 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 00:55:11.338702    5434 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 00:55:11.346009    5434 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 00:55:11.346164    5434 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 00:55:11.346198    5434 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 00:55:11.354219    5434 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 00:55:11.362075    5434 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 00:55:11.434757    5434 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 00:55:11.434770    5434 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0806 00:55:11.434775    5434 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0806 00:55:11.434780    5434 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0806 00:55:11.434789    5434 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0806 00:55:11.434795    5434 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0806 00:55:11.434800    5434 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0806 00:55:11.434806    5434 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0806 00:55:11.434813    5434 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0806 00:55:11.434823    5434 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0806 00:55:11.434829    5434 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0806 00:55:11.434833    5434 command_runner.go:130] > [certs] Using the existing "sa" key
	I0806 00:55:11.434846    5434 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 00:55:11.472110    5434 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 00:55:11.703561    5434 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 00:55:11.896147    5434 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0806 00:55:12.067020    5434 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 00:55:12.205169    5434 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 00:55:12.503640    5434 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 00:55:12.505818    5434 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.070936358s)
	I0806 00:55:12.505831    5434 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0806 00:55:12.559506    5434 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 00:55:12.559522    5434 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 00:55:12.559526    5434 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0806 00:55:12.662923    5434 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 00:55:12.717182    5434 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 00:55:12.717196    5434 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 00:55:12.718956    5434 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 00:55:12.719502    5434 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 00:55:12.721168    5434 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0806 00:55:12.793262    5434 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 00:55:12.801338    5434 api_server.go:52] waiting for apiserver process to appear ...
	I0806 00:55:12.801401    5434 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:55:13.302705    5434 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:55:13.801616    5434 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:55:13.813958    5434 command_runner.go:130] > 1781
	I0806 00:55:13.814003    5434 api_server.go:72] duration metric: took 1.01265181s to wait for apiserver process to appear ...
	I0806 00:55:13.814011    5434 api_server.go:88] waiting for apiserver healthz status ...
	I0806 00:55:13.814027    5434 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0806 00:55:16.347202    5434 api_server.go:279] https://192.169.0.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 00:55:16.347218    5434 api_server.go:103] status: https://192.169.0.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 00:55:16.347226    5434 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0806 00:55:16.392636    5434 api_server.go:279] https://192.169.0.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 00:55:16.392652    5434 api_server.go:103] status: https://192.169.0.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 00:55:16.814908    5434 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0806 00:55:16.825473    5434 api_server.go:279] https://192.169.0.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 00:55:16.825491    5434 api_server.go:103] status: https://192.169.0.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 00:55:17.314170    5434 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0806 00:55:17.318884    5434 api_server.go:279] https://192.169.0.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 00:55:17.318899    5434 api_server.go:103] status: https://192.169.0.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 00:55:17.814354    5434 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0806 00:55:17.818288    5434 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
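	The probe sequence above — repeated GETs to /healthz that see 403 while anonymous auth is rejected, then 500 while post-start hooks like rbac/bootstrap-roles finish, then 200 "ok" — amounts to a simple retry loop. A rough self-contained sketch follows; the interval, attempt count, and the flaky stand-in server are illustrative assumptions, not minikube's actual api_server.go implementation:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200, sleeping between
// attempts, mirroring the probe cadence visible in the log above.
func waitForHealthz(c *http.Client, url string, interval time.Duration, attempts int) (int, error) {
	last := 0
	for i := 0; i < attempts; i++ {
		if resp, err := c.Get(url); err == nil {
			last = resp.StatusCode
			resp.Body.Close()
			if last == http.StatusOK {
				return last, nil
			}
		}
		time.Sleep(interval)
	}
	return last, fmt.Errorf("healthz never became ready (last status %d)", last)
}

// demo is a hypothetical stand-in for a booting apiserver: one 403 probe,
// one 500 probe (as in the log), then a healthy "ok".
func demo() (int, error) {
	n := 0
	codes := []int{http.StatusForbidden, http.StatusInternalServerError}
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if n < len(codes) {
			w.WriteHeader(codes[n])
			n++
			return
		}
		fmt.Fprint(w, "ok")
	}))
	defer srv.Close()
	return waitForHealthz(srv.Client(), srv.URL+"/healthz", time.Millisecond, 10)
}

func main() {
	status, err := demo()
	fmt.Println(status, err) // 200 <nil>
}
```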
	I0806 00:55:17.818355    5434 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I0806 00:55:17.818361    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:17.818368    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:17.818371    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:17.823335    5434 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 00:55:17.823346    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:17.823351    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:17.823354    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:17.823357    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:17.823359    5434 round_trippers.go:580]     Content-Length: 263
	I0806 00:55:17.823362    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:17 GMT
	I0806 00:55:17.823365    5434 round_trippers.go:580]     Audit-Id: 7135051e-b726-47d5-a200-f2d12032ef14
	I0806 00:55:17.823368    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:17.823389    5434 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
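	The /version response body logged above is plain JSON; before logging "control plane version: v1.30.3" the client decodes it into a struct. A minimal sketch of that decode, assuming a struct covering only the fields shown in the log (the type and function names here are hypothetical):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// versionInfo mirrors a subset of the fields visible in the /version body.
type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
	Platform   string `json:"platform"`
}

// parseVersion decodes a /version response body into versionInfo.
func parseVersion(body []byte) (versionInfo, error) {
	var v versionInfo
	err := json.Unmarshal(body, &v)
	return v, err
}

func main() {
	body := []byte(`{"major":"1","minor":"30","gitVersion":"v1.30.3","platform":"linux/amd64"}`)
	v, err := parseVersion(body)
	fmt.Println(v.GitVersion, err) // v1.30.3 <nil>
}
```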
	I0806 00:55:17.823431    5434 api_server.go:141] control plane version: v1.30.3
	I0806 00:55:17.823441    5434 api_server.go:131] duration metric: took 4.009346825s to wait for apiserver health ...
	I0806 00:55:17.823448    5434 cni.go:84] Creating CNI manager for ""
	I0806 00:55:17.823451    5434 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0806 00:55:17.844296    5434 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0806 00:55:17.866393    5434 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0806 00:55:17.872058    5434 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0806 00:55:17.872069    5434 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0806 00:55:17.872100    5434 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0806 00:55:17.872129    5434 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0806 00:55:17.872137    5434 command_runner.go:130] > Access: 2024-08-06 07:55:03.988856323 +0000
	I0806 00:55:17.872142    5434 command_runner.go:130] > Modify: 2024-07-29 16:10:03.000000000 +0000
	I0806 00:55:17.872146    5434 command_runner.go:130] > Change: 2024-08-06 07:55:01.454930767 +0000
	I0806 00:55:17.872149    5434 command_runner.go:130] >  Birth: -
	I0806 00:55:17.872222    5434 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0806 00:55:17.872231    5434 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0806 00:55:17.887537    5434 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0806 00:55:18.233164    5434 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0806 00:55:18.245992    5434 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0806 00:55:18.309665    5434 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0806 00:55:18.352694    5434 command_runner.go:130] > daemonset.apps/kindnet configured
	I0806 00:55:18.354227    5434 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 00:55:18.354308    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:55:18.354315    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:18.354322    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:18.354326    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:18.356655    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:18.356668    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:18.356676    5434 round_trippers.go:580]     Audit-Id: 2991c079-ff2b-41b9-b1df-dd8b701947e3
	I0806 00:55:18.356682    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:18.356688    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:18.356692    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:18.356696    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:18.356701    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:18 GMT
	I0806 00:55:18.357370    5434 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1423"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"1411","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 73056 chars]
	I0806 00:55:18.361239    5434 system_pods.go:59] 10 kube-system pods found
	I0806 00:55:18.361259    5434 system_pods.go:61] "coredns-7db6d8ff4d-snf8h" [80bd44de-6f91-4e47-8832-a66b3c64808d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0806 00:55:18.361264    5434 system_pods.go:61] "etcd-multinode-100000" [227ab7d9-399e-4151-bee7-1520182e38fe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0806 00:55:18.361269    5434 system_pods.go:61] "kindnet-dn72w" [34a2c1f4-38da-4e95-8d44-d2eae75e5dcb] Running
	I0806 00:55:18.361285    5434 system_pods.go:61] "kindnet-g2xk7" [84207ead-3403-4759-9bf2-ae0aa742699e] Running
	I0806 00:55:18.361295    5434 system_pods.go:61] "kube-apiserver-multinode-100000" [ce1dee9b-5f30-49a9-9066-7faf5f65c4d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0806 00:55:18.361301    5434 system_pods.go:61] "kube-controller-manager-multinode-100000" [cefe88fb-c337-47c3-b4f2-acdadde539f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0806 00:55:18.361307    5434 system_pods.go:61] "kube-proxy-crsrr" [f72beca3-9601-4aad-b3ba-33f8de5db052] Running
	I0806 00:55:18.361310    5434 system_pods.go:61] "kube-proxy-d9c42" [fe685526-4722-4113-b2b3-9a84182541b7] Running
	I0806 00:55:18.361315    5434 system_pods.go:61] "kube-scheduler-multinode-100000" [773d7bde-86f3-4e9d-b4aa-67ca3b345180] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0806 00:55:18.361318    5434 system_pods.go:61] "storage-provisioner" [38b20fa5-6002-4e12-860f-1aa0047581b1] Running
	I0806 00:55:18.361323    5434 system_pods.go:74] duration metric: took 7.088649ms to wait for pod list to return data ...
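	The pod summaries above (e.g. "Running / Ready:ContainersNotReady ...") come from inspecting each pod's status conditions in the PodList response. A minimal stand-in for that check, using a simplified condition type rather than the real Kubernetes API structs:

```go
package main

import "fmt"

// condition is a simplified stand-in for corev1.PodCondition, keeping only
// the two fields the readiness check needs.
type condition struct {
	Type   string
	Status string
}

// isReady reports whether the pod's Ready condition is True — the test
// behind summaries like `"coredns-..." Running / Ready:ContainersNotReady`.
func isReady(conds []condition) bool {
	for _, c := range conds {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	return false
}

func main() {
	restarted := []condition{{"Ready", "False"}, {"ContainersReady", "False"}}
	healthy := []condition{{"Ready", "True"}, {"ContainersReady", "True"}}
	fmt.Println(isReady(restarted), isReady(healthy)) // false true
}
```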
	I0806 00:55:18.361331    5434 node_conditions.go:102] verifying NodePressure condition ...
	I0806 00:55:18.361366    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0806 00:55:18.361371    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:18.361377    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:18.361382    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:18.362937    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:18.362946    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:18.362951    5434 round_trippers.go:580]     Audit-Id: f2956865-fa14-407b-9a6f-c187433e5c48
	I0806 00:55:18.362956    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:18.362958    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:18.362961    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:18.362963    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:18.362966    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:18 GMT
	I0806 00:55:18.363144    5434 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1423"},"items":[{"metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10158 chars]
	I0806 00:55:18.363584    5434 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 00:55:18.363598    5434 node_conditions.go:123] node cpu capacity is 2
	I0806 00:55:18.363612    5434 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 00:55:18.363617    5434 node_conditions.go:123] node cpu capacity is 2
	I0806 00:55:18.363620    5434 node_conditions.go:105] duration metric: took 2.285564ms to run NodePressure ...
	I0806 00:55:18.363630    5434 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 00:55:18.465445    5434 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0806 00:55:18.619573    5434 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0806 00:55:18.620797    5434 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0806 00:55:18.620897    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0806 00:55:18.620908    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:18.620916    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:18.620933    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:18.622688    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:18.622703    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:18.622711    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:18.622716    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:18 GMT
	I0806 00:55:18.622721    5434 round_trippers.go:580]     Audit-Id: 4c64a921-516a-4271-826d-6e9af481f0ee
	I0806 00:55:18.622725    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:18.622739    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:18.622748    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:18.623132    5434 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1425"},"items":[{"metadata":{"name":"etcd-multinode-100000","namespace":"kube-system","uid":"227ab7d9-399e-4151-bee7-1520182e38fe","resourceVersion":"1410","creationTimestamp":"2024-08-06T07:37:59Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"4d956ffcd8bdef6a75a3174d9c9d792c","kubernetes.io/config.mirror":"4d956ffcd8bdef6a75a3174d9c9d792c","kubernetes.io/config.seen":"2024-08-06T07:37:55.730523562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:37:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 30917 chars]
	I0806 00:55:18.623869    5434 kubeadm.go:739] kubelet initialised
	I0806 00:55:18.623879    5434 kubeadm.go:740] duration metric: took 3.065796ms waiting for restarted kubelet to initialise ...
	I0806 00:55:18.623886    5434 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 00:55:18.623919    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:55:18.623925    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:18.623930    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:18.623934    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:18.625655    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:18.625662    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:18.625667    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:18.625671    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:18.625673    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:18.625675    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:18.625677    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:18 GMT
	I0806 00:55:18.625679    5434 round_trippers.go:580]     Audit-Id: 54fe049e-2496-412e-8bf9-6980782498d1
	I0806 00:55:18.626717    5434 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1425"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"1411","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 73056 chars]
	I0806 00:55:18.628343    5434 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:18.628387    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:55:18.628392    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:18.628409    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:18.628415    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:18.629588    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:18.629607    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:18.629617    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:18.629623    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:18.629630    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:18.629643    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:18 GMT
	I0806 00:55:18.629650    5434 round_trippers.go:580]     Audit-Id: 9ae58c9e-38cc-4d0c-9097-8381a2972b06
	I0806 00:55:18.629653    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:18.629757    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"1411","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0806 00:55:18.630007    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:18.630014    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:18.630020    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:18.630024    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:18.631033    5434 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:55:18.631042    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:18.631049    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:18.631055    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:18.631061    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:18.631067    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:18.631069    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:18 GMT
	I0806 00:55:18.631076    5434 round_trippers.go:580]     Audit-Id: 5da8eb8d-907f-423e-9741-1304c63aac04
	I0806 00:55:18.631208    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:18.631394    5434 pod_ready.go:97] node "multinode-100000" hosting pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-100000" has status "Ready":"False"
	I0806 00:55:18.631404    5434 pod_ready.go:81] duration metric: took 3.05173ms for pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace to be "Ready" ...
	E0806 00:55:18.631410    5434 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-100000" hosting pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-100000" has status "Ready":"False"
	I0806 00:55:18.631417    5434 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:18.631450    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-100000
	I0806 00:55:18.631455    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:18.631460    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:18.631464    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:18.632332    5434 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:55:18.632342    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:18.632356    5434 round_trippers.go:580]     Audit-Id: 54a52952-9d42-450d-8231-0b11106f9607
	I0806 00:55:18.632363    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:18.632367    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:18.632369    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:18.632371    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:18.632376    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:18 GMT
	I0806 00:55:18.632599    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-100000","namespace":"kube-system","uid":"227ab7d9-399e-4151-bee7-1520182e38fe","resourceVersion":"1410","creationTimestamp":"2024-08-06T07:37:59Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"4d956ffcd8bdef6a75a3174d9c9d792c","kubernetes.io/config.mirror":"4d956ffcd8bdef6a75a3174d9c9d792c","kubernetes.io/config.seen":"2024-08-06T07:37:55.730523562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:37:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6582 chars]
	I0806 00:55:18.632795    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:18.632802    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:18.632808    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:18.632812    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:18.633675    5434 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:55:18.633681    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:18.633686    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:18.633689    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:18.633691    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:18.633693    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:18 GMT
	I0806 00:55:18.633696    5434 round_trippers.go:580]     Audit-Id: ce930a97-7ef4-41b0-861f-6ee9e9ecdedc
	I0806 00:55:18.633700    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:18.633807    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:18.633989    5434 pod_ready.go:97] node "multinode-100000" hosting pod "etcd-multinode-100000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-100000" has status "Ready":"False"
	I0806 00:55:18.633997    5434 pod_ready.go:81] duration metric: took 2.576204ms for pod "etcd-multinode-100000" in "kube-system" namespace to be "Ready" ...
	E0806 00:55:18.634003    5434 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-100000" hosting pod "etcd-multinode-100000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-100000" has status "Ready":"False"
	I0806 00:55:18.634013    5434 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:18.634047    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-100000
	I0806 00:55:18.634051    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:18.634056    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:18.634060    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:18.635009    5434 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:55:18.635016    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:18.635021    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:18.635029    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:18.635032    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:18.635035    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:18.635039    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:18 GMT
	I0806 00:55:18.635042    5434 round_trippers.go:580]     Audit-Id: e825c9c7-f114-4531-be9a-248fd14f9459
	I0806 00:55:18.635226    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-100000","namespace":"kube-system","uid":"ce1dee9b-5f30-49a9-9066-7faf5f65c4d3","resourceVersion":"1414","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"7812fbdfd4f741d8b504bcb30d9268c5","kubernetes.io/config.mirror":"7812fbdfd4f741d8b504bcb30d9268c5","kubernetes.io/config.seen":"2024-08-06T07:38:00.425843150Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8136 chars]
	I0806 00:55:18.635463    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:18.635470    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:18.635476    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:18.635480    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:18.636292    5434 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:55:18.636298    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:18.636303    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:18.636306    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:18.636309    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:18.636312    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:18 GMT
	I0806 00:55:18.636314    5434 round_trippers.go:580]     Audit-Id: 2be5fb5e-74fa-4c4e-949a-fbca588eb68f
	I0806 00:55:18.636317    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:18.636503    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:18.636669    5434 pod_ready.go:97] node "multinode-100000" hosting pod "kube-apiserver-multinode-100000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-100000" has status "Ready":"False"
	I0806 00:55:18.636680    5434 pod_ready.go:81] duration metric: took 2.660039ms for pod "kube-apiserver-multinode-100000" in "kube-system" namespace to be "Ready" ...
	E0806 00:55:18.636687    5434 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-100000" hosting pod "kube-apiserver-multinode-100000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-100000" has status "Ready":"False"
	I0806 00:55:18.636692    5434 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:18.636726    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-100000
	I0806 00:55:18.636731    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:18.636737    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:18.636741    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:18.637642    5434 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:55:18.637648    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:18.637652    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:18.637655    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:18.637658    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:18 GMT
	I0806 00:55:18.637660    5434 round_trippers.go:580]     Audit-Id: edd31c10-32fe-4cc6-a258-36887d0ea7c0
	I0806 00:55:18.637662    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:18.637665    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:18.637798    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-100000","namespace":"kube-system","uid":"cefe88fb-c337-47c3-b4f2-acdadde539f2","resourceVersion":"1415","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0ae29164078dfb7d8ac7d5a935c4d875","kubernetes.io/config.mirror":"0ae29164078dfb7d8ac7d5a935c4d875","kubernetes.io/config.seen":"2024-08-06T07:38:00.425770816Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7727 chars]
	I0806 00:55:18.755614    5434 request.go:629] Waited for 117.467135ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:18.755664    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:18.755674    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:18.755684    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:18.755692    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:18.758404    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:18.758429    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:18.758437    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:18.758440    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:18.758444    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:18.758447    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:18 GMT
	I0806 00:55:18.758450    5434 round_trippers.go:580]     Audit-Id: af6f8e30-1dca-44ac-8578-33584ad0edf5
	I0806 00:55:18.758454    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:18.758542    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:18.758793    5434 pod_ready.go:97] node "multinode-100000" hosting pod "kube-controller-manager-multinode-100000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-100000" has status "Ready":"False"
	I0806 00:55:18.758808    5434 pod_ready.go:81] duration metric: took 122.106804ms for pod "kube-controller-manager-multinode-100000" in "kube-system" namespace to be "Ready" ...
	E0806 00:55:18.758816    5434 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-100000" hosting pod "kube-controller-manager-multinode-100000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-100000" has status "Ready":"False"
	I0806 00:55:18.758821    5434 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-crsrr" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:18.955181    5434 request.go:629] Waited for 196.311166ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crsrr
	I0806 00:55:18.955337    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crsrr
	I0806 00:55:18.955348    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:18.955358    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:18.955366    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:18.958400    5434 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:55:18.958415    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:18.958425    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:18.958430    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:18.958434    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:18.958440    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:19 GMT
	I0806 00:55:18.958446    5434 round_trippers.go:580]     Audit-Id: fdf94cbc-8c9f-4c29-a9b2-d4cd8da861c7
	I0806 00:55:18.958473    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:18.958934    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-crsrr","generateName":"kube-proxy-","namespace":"kube-system","uid":"f72beca3-9601-4aad-b3ba-33f8de5db052","resourceVersion":"1421","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aeb7868a-2175-4480-b58d-3eb9a593c884","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aeb7868a-2175-4480-b58d-3eb9a593c884\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0806 00:55:19.156544    5434 request.go:629] Waited for 197.171014ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:19.156603    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:19.156614    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:19.156625    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:19.156633    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:19.160037    5434 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:55:19.160055    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:19.160062    5434 round_trippers.go:580]     Audit-Id: 82e01bb1-a559-4c93-bc5a-36ad03799626
	I0806 00:55:19.160067    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:19.160072    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:19.160076    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:19.160080    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:19.160083    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:19 GMT
	I0806 00:55:19.160403    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:19.160658    5434 pod_ready.go:97] node "multinode-100000" hosting pod "kube-proxy-crsrr" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-100000" has status "Ready":"False"
	I0806 00:55:19.160675    5434 pod_ready.go:81] duration metric: took 401.836129ms for pod "kube-proxy-crsrr" in "kube-system" namespace to be "Ready" ...
	E0806 00:55:19.160684    5434 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-100000" hosting pod "kube-proxy-crsrr" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-100000" has status "Ready":"False"
	I0806 00:55:19.160691    5434 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-d9c42" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:19.355243    5434 request.go:629] Waited for 194.498904ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d9c42
	I0806 00:55:19.355294    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d9c42
	I0806 00:55:19.355303    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:19.355315    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:19.355391    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:19.358093    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:19.358107    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:19.358114    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:19.358119    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:19 GMT
	I0806 00:55:19.358122    5434 round_trippers.go:580]     Audit-Id: cfff1d7b-c2df-4e8e-900e-e15fd07ebae4
	I0806 00:55:19.358127    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:19.358132    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:19.358135    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:19.358647    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-d9c42","generateName":"kube-proxy-","namespace":"kube-system","uid":"fe685526-4722-4113-b2b3-9a84182541b7","resourceVersion":"1300","creationTimestamp":"2024-08-06T07:52:07Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aeb7868a-2175-4480-b58d-3eb9a593c884","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:52:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aeb7868a-2175-4480-b58d-3eb9a593c884\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5832 chars]
	I0806 00:55:19.554787    5434 request.go:629] Waited for 195.789836ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m03
	I0806 00:55:19.554930    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m03
	I0806 00:55:19.554938    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:19.554953    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:19.554961    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:19.557592    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:19.557606    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:19.557614    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:19.557618    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:19 GMT
	I0806 00:55:19.557622    5434 round_trippers.go:580]     Audit-Id: fc423048-eadc-4e3d-838a-5bb5420a7872
	I0806 00:55:19.557625    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:19.557629    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:19.557633    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:19.557814    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m03","uid":"3008e7de-9d1d-41e0-b794-0ab4c70ffeba","resourceVersion":"1326","creationTimestamp":"2024-08-06T07:53:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_53_13_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:53:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3811 chars]
	I0806 00:55:19.558036    5434 pod_ready.go:92] pod "kube-proxy-d9c42" in "kube-system" namespace has status "Ready":"True"
	I0806 00:55:19.558048    5434 pod_ready.go:81] duration metric: took 397.342388ms for pod "kube-proxy-d9c42" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:19.558056    5434 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:19.756400    5434 request.go:629] Waited for 198.278845ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-100000
	I0806 00:55:19.756502    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-100000
	I0806 00:55:19.756512    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:19.756524    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:19.756530    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:19.759100    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:19.759114    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:19.759120    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:19.759129    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:19.759134    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:19 GMT
	I0806 00:55:19.759138    5434 round_trippers.go:580]     Audit-Id: 756a7ec3-521f-4c8f-b571-4f454c539bae
	I0806 00:55:19.759142    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:19.759145    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:19.759475    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-100000","namespace":"kube-system","uid":"773d7bde-86f3-4e9d-b4aa-67ca3b345180","resourceVersion":"1416","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4d38f57d568be838072abd789adb44b9","kubernetes.io/config.mirror":"4d38f57d568be838072abd789adb44b9","kubernetes.io/config.seen":"2024-08-06T07:38:00.425836810Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5439 chars]
	I0806 00:55:19.954705    5434 request.go:629] Waited for 194.852458ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:19.954757    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:19.954765    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:19.954777    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:19.954784    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:19.957479    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:19.957495    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:19.957502    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:20 GMT
	I0806 00:55:19.957524    5434 round_trippers.go:580]     Audit-Id: b6ccac5a-1a70-456e-887f-92b77e90d08a
	I0806 00:55:19.957533    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:19.957537    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:19.957542    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:19.957546    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:19.957636    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:19.957890    5434 pod_ready.go:97] node "multinode-100000" hosting pod "kube-scheduler-multinode-100000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-100000" has status "Ready":"False"
	I0806 00:55:19.957903    5434 pod_ready.go:81] duration metric: took 399.83274ms for pod "kube-scheduler-multinode-100000" in "kube-system" namespace to be "Ready" ...
	E0806 00:55:19.957911    5434 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-100000" hosting pod "kube-scheduler-multinode-100000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-100000" has status "Ready":"False"
	I0806 00:55:19.957918    5434 pod_ready.go:38] duration metric: took 1.333999093s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 00:55:19.957935    5434 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 00:55:19.968369    5434 command_runner.go:130] > -16
	I0806 00:55:19.968420    5434 ops.go:34] apiserver oom_adj: -16
	I0806 00:55:19.968427    5434 kubeadm.go:597] duration metric: took 8.744836312s to restartPrimaryControlPlane
	I0806 00:55:19.968433    5434 kubeadm.go:394] duration metric: took 8.765947423s to StartCluster
	I0806 00:55:19.968442    5434 settings.go:142] acquiring lock: {Name:mk7aec99dc6d69d6a2c18b35ff8bde3cddf78620 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:55:19.968529    5434 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:55:19.968882    5434 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/kubeconfig: {Name:mka547673b59bc4eb06e1f2c8130de31708dba29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:55:19.969192    5434 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 00:55:19.969203    5434 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 00:55:19.969323    5434 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:55:20.011380    5434 out.go:177] * Verifying Kubernetes components...
	I0806 00:55:20.053491    5434 out.go:177] * Enabled addons: 
	I0806 00:55:20.074236    5434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:55:20.095069    5434 addons.go:510] duration metric: took 125.861598ms for enable addons: enabled=[]
	I0806 00:55:20.212476    5434 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:55:20.224008    5434 node_ready.go:35] waiting up to 6m0s for node "multinode-100000" to be "Ready" ...
	I0806 00:55:20.224067    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:20.224072    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:20.224078    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:20.224080    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:20.225422    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:20.225431    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:20.225436    5434 round_trippers.go:580]     Audit-Id: bdea768a-0771-4f81-aaf0-72fff444e818
	I0806 00:55:20.225441    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:20.225444    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:20.225447    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:20.225449    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:20.225452    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:20 GMT
	I0806 00:55:20.225608    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:20.724322    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:20.724338    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:20.724343    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:20.724346    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:20.725688    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:20.725697    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:20.725702    5434 round_trippers.go:580]     Audit-Id: 47529c6c-2b8f-42fa-bbdc-49f1a87bfa63
	I0806 00:55:20.725705    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:20.725714    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:20.725718    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:20.725723    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:20.725725    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:20 GMT
	I0806 00:55:20.726105    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:21.226129    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:21.226178    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:21.226198    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:21.226204    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:21.228432    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:21.228446    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:21.228456    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:21.228489    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:21.228499    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:21 GMT
	I0806 00:55:21.228503    5434 round_trippers.go:580]     Audit-Id: 940861b1-1bee-4669-a633-b78c51fb0e01
	I0806 00:55:21.228507    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:21.228509    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:21.228685    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:21.724884    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:21.724905    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:21.724917    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:21.724923    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:21.727212    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:21.727229    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:21.727239    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:21.727246    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:21 GMT
	I0806 00:55:21.727270    5434 round_trippers.go:580]     Audit-Id: 39b8c42d-1a15-4b85-a44d-54efa33d7b3c
	I0806 00:55:21.727277    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:21.727281    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:21.727285    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:21.727451    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:22.224395    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:22.224413    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:22.224441    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:22.224446    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:22.226043    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:22.226056    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:22.226062    5434 round_trippers.go:580]     Audit-Id: c8e91ed2-2430-4518-8a7e-297131509505
	I0806 00:55:22.226068    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:22.226072    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:22.226082    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:22.226085    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:22.226096    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:22 GMT
	I0806 00:55:22.226168    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:22.226394    5434 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:55:22.724217    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:22.724231    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:22.724238    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:22.724241    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:22.725884    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:22.725893    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:22.725898    5434 round_trippers.go:580]     Audit-Id: 8984b90e-4f21-48a0-9922-aa383b02e2e4
	I0806 00:55:22.725901    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:22.725904    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:22.725906    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:22.725921    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:22.725928    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:22 GMT
	I0806 00:55:22.726006    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:23.226197    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:23.226213    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:23.226222    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:23.226227    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:23.228077    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:23.228091    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:23.228098    5434 round_trippers.go:580]     Audit-Id: 05da540f-2de9-4c2b-a831-86d6b1a0af0c
	I0806 00:55:23.228104    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:23.228109    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:23.228113    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:23.228116    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:23.228121    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:23 GMT
	I0806 00:55:23.228329    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:23.724616    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:23.724634    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:23.724641    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:23.724647    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:23.726561    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:23.726570    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:23.726575    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:23.726579    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:23.726582    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:23.726585    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:23.726589    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:23 GMT
	I0806 00:55:23.726592    5434 round_trippers.go:580]     Audit-Id: 717421be-9914-41af-87cd-548074beffe0
	I0806 00:55:23.726870    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:24.224496    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:24.224520    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:24.224530    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:24.224536    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:24.227165    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:24.227180    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:24.227187    5434 round_trippers.go:580]     Audit-Id: d59bf722-b6ed-4856-97db-eeadf233cae4
	I0806 00:55:24.227193    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:24.227197    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:24.227203    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:24.227206    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:24.227211    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:24 GMT
	I0806 00:55:24.227590    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:24.227843    5434 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:55:24.724366    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:24.724387    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:24.724399    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:24.724405    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:24.726945    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:24.726958    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:24.726968    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:24.726975    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:24.726980    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:24 GMT
	I0806 00:55:24.726985    5434 round_trippers.go:580]     Audit-Id: 8e42586d-311e-4466-b4e7-937ae9d22140
	I0806 00:55:24.726997    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:24.727002    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:24.727083    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:25.224290    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:25.224302    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:25.224308    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:25.224311    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:25.225919    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:25.225934    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:25.225942    5434 round_trippers.go:580]     Audit-Id: dbeceab5-b466-4bcf-927f-aa8125cf10e4
	I0806 00:55:25.225948    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:25.225954    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:25.225958    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:25.225961    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:25.225964    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:25 GMT
	I0806 00:55:25.226109    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:25.724558    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:25.724580    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:25.724592    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:25.724597    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:25.727186    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:25.727202    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:25.727209    5434 round_trippers.go:580]     Audit-Id: b4b2e1be-2bcf-4130-b3e7-f3cd59e84c3a
	I0806 00:55:25.727215    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:25.727219    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:25.727222    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:25.727226    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:25.727233    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:25 GMT
	I0806 00:55:25.727351    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:26.225182    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:26.225208    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:26.225220    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:26.225228    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:26.227610    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:26.227622    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:26.227630    5434 round_trippers.go:580]     Audit-Id: 2c486001-af75-4c3e-873b-f0aa48805906
	I0806 00:55:26.227634    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:26.227638    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:26.227641    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:26.227647    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:26.227651    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:26 GMT
	I0806 00:55:26.227868    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:26.228123    5434 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:55:26.724703    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:26.724727    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:26.724738    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:26.724744    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:26.726496    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:26.726519    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:26.726533    5434 round_trippers.go:580]     Audit-Id: 1b3f8dbf-e2f2-4f76-bf76-cf65ddb488eb
	I0806 00:55:26.726549    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:26.726560    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:26.726565    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:26.726568    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:26.726589    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:26 GMT
	I0806 00:55:26.726842    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:27.226408    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:27.226444    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:27.226545    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:27.226554    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:27.228994    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:27.229009    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:27.229017    5434 round_trippers.go:580]     Audit-Id: 4a536acc-e002-4c95-a24d-c96441616539
	I0806 00:55:27.229025    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:27.229031    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:27.229038    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:27.229043    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:27.229049    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:27 GMT
	I0806 00:55:27.229272    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:27.725397    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:27.725417    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:27.725424    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:27.725428    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:27.727162    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:27.727172    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:27.727177    5434 round_trippers.go:580]     Audit-Id: efd01aab-f01c-4df6-8fd1-0677a66eabbb
	I0806 00:55:27.727180    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:27.727184    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:27.727187    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:27.727190    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:27.727192    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:27 GMT
	I0806 00:55:27.727269    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:28.225138    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:28.225166    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:28.225178    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:28.225184    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:28.228317    5434 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:55:28.228332    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:28.228339    5434 round_trippers.go:580]     Audit-Id: 1920522c-27e2-428b-b9b0-32dfc742e256
	I0806 00:55:28.228349    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:28.228358    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:28.228364    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:28.228372    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:28.228379    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:28 GMT
	I0806 00:55:28.228805    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:28.229044    5434 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:55:28.725143    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:28.725164    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:28.725175    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:28.725183    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:28.727602    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:28.727614    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:28.727622    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:28.727625    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:28.727629    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:28 GMT
	I0806 00:55:28.727634    5434 round_trippers.go:580]     Audit-Id: a229cf93-a9eb-4122-af98-feb607626cde
	I0806 00:55:28.727640    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:28.727646    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:28.727860    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:29.226062    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:29.226087    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:29.226100    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:29.226107    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:29.228452    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:29.228469    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:29.228479    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:29 GMT
	I0806 00:55:29.228486    5434 round_trippers.go:580]     Audit-Id: e691e116-e3f5-42b3-bb33-372df33e535e
	I0806 00:55:29.228493    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:29.228497    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:29.228502    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:29.228505    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:29.228645    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:29.724986    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:29.725011    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:29.725022    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:29.725030    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:29.727169    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:29.727183    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:29.727190    5434 round_trippers.go:580]     Audit-Id: b81b78a4-94f1-448e-91f8-4f23fa3af150
	I0806 00:55:29.727195    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:29.727201    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:29.727207    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:29.727210    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:29.727214    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:29 GMT
	I0806 00:55:29.727520    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1525","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0806 00:55:30.225653    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:30.225742    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:30.225756    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:30.225762    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:30.228535    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:30.228548    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:30.228555    5434 round_trippers.go:580]     Audit-Id: bdacd861-bd2b-4cf6-a7d0-225972f6913b
	I0806 00:55:30.228561    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:30.228566    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:30.228571    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:30.228580    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:30.228584    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:30 GMT
	I0806 00:55:30.228987    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1525","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0806 00:55:30.229242    5434 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:55:30.726557    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:30.726579    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:30.726588    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:30.726595    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:30.729268    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:30.729283    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:30.729290    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:30.729294    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:30.729297    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:30.729300    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:30.729303    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:30 GMT
	I0806 00:55:30.729307    5434 round_trippers.go:580]     Audit-Id: c0bd55a9-f230-44de-82ce-98cc873c9c5b
	I0806 00:55:30.729453    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1534","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0806 00:55:30.729709    5434 node_ready.go:49] node "multinode-100000" has status "Ready":"True"
	I0806 00:55:30.729725    5434 node_ready.go:38] duration metric: took 10.505490137s for node "multinode-100000" to be "Ready" ...
	I0806 00:55:30.729734    5434 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 00:55:30.729775    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:55:30.729784    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:30.729791    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:30.729795    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:30.732583    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:30.732591    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:30.732596    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:30.732614    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:30.732620    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:30 GMT
	I0806 00:55:30.732622    5434 round_trippers.go:580]     Audit-Id: 1220cc10-8f83-4c12-ba78-927ce112b5f3
	I0806 00:55:30.732625    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:30.732627    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:30.734117    5434 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1534"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"1411","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 73675 chars]
	I0806 00:55:30.735649    5434 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:30.735685    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:55:30.735690    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:30.735704    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:30.735708    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:30.736915    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:30.736924    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:30.736929    5434 round_trippers.go:580]     Audit-Id: d3507374-ba87-4836-9b87-4357a5f97dc7
	I0806 00:55:30.736945    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:30.736950    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:30.736952    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:30.736954    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:30.736959    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:30 GMT
	I0806 00:55:30.737082    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"1411","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0806 00:55:30.737316    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:30.737323    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:30.737328    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:30.737332    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:30.738368    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:30.738376    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:30.738381    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:30.738395    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:30.738412    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:30.738420    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:30.738423    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:30 GMT
	I0806 00:55:30.738425    5434 round_trippers.go:580]     Audit-Id: 808a8e75-27d6-42e8-b896-4d2236f6bef9
	I0806 00:55:30.738497    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1534","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0806 00:55:31.236013    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:55:31.236035    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:31.236046    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:31.236054    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:31.238975    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:31.238989    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:31.238996    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:31.239000    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:31 GMT
	I0806 00:55:31.239003    5434 round_trippers.go:580]     Audit-Id: 391ad135-3678-429d-802e-74a7765536c8
	I0806 00:55:31.239006    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:31.239018    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:31.239022    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:31.239147    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"1411","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0806 00:55:31.239529    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:31.239539    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:31.239546    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:31.239556    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:31.240953    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:31.240959    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:31.240963    5434 round_trippers.go:580]     Audit-Id: 19464df1-e3ea-45f2-94dd-4cb9f0465a30
	I0806 00:55:31.240966    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:31.240971    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:31.240976    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:31.240979    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:31.240983    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:31 GMT
	I0806 00:55:31.241167    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1534","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0806 00:55:31.736051    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:55:31.736072    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:31.736084    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:31.736091    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:31.737509    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:31.737522    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:31.737528    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:31.737532    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:31.737535    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:31.737539    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:31 GMT
	I0806 00:55:31.737542    5434 round_trippers.go:580]     Audit-Id: 5099062f-8493-491e-a4a1-1c46865d67f0
	I0806 00:55:31.737544    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:31.737757    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"1411","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0806 00:55:31.738027    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:31.738033    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:31.738039    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:31.738043    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:31.739188    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:31.739201    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:31.739208    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:31.739212    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:31.739214    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:31.739216    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:31.739218    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:31 GMT
	I0806 00:55:31.739221    5434 round_trippers.go:580]     Audit-Id: a820394a-4ac7-4381-ae5a-0ee548fc3466
	I0806 00:55:31.739417    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1534","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0806 00:55:32.236101    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:55:32.236129    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:32.236143    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:32.236152    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:32.238744    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:32.238759    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:32.238767    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:32.238771    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:32 GMT
	I0806 00:55:32.238775    5434 round_trippers.go:580]     Audit-Id: b0ceeb15-7922-46f6-99b5-dac683aa46d7
	I0806 00:55:32.238779    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:32.238782    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:32.238804    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:32.238999    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"1411","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0806 00:55:32.239388    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:32.239400    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:32.239408    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:32.239412    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:32.241071    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:32.241081    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:32.241088    5434 round_trippers.go:580]     Audit-Id: ab6be580-fa10-424a-9c1a-381050be71b5
	I0806 00:55:32.241093    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:32.241098    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:32.241101    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:32.241110    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:32.241114    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:32 GMT
	I0806 00:55:32.241537    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1534","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0806 00:55:32.735995    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:55:32.736007    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:32.736012    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:32.736016    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:32.737431    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:32.737439    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:32.737444    5434 round_trippers.go:580]     Audit-Id: 382f9f72-ab4a-4536-b24d-b2b3e2e59685
	I0806 00:55:32.737447    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:32.737450    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:32.737454    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:32.737456    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:32.737459    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:32 GMT
	I0806 00:55:32.737766    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"1411","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0806 00:55:32.738042    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:32.738049    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:32.738054    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:32.738059    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:32.739904    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:32.739913    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:32.739918    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:32.739924    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:32 GMT
	I0806 00:55:32.739930    5434 round_trippers.go:580]     Audit-Id: 55e12fe6-f961-453a-b23d-5fb17f5439e1
	I0806 00:55:32.739933    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:32.739938    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:32.739940    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:32.740006    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1534","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0806 00:55:32.740180    5434 pod_ready.go:102] pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace has status "Ready":"False"
	I0806 00:55:33.236669    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:55:33.236689    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:33.236698    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:33.236704    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:33.239178    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:33.239191    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:33.239199    5434 round_trippers.go:580]     Audit-Id: 0a4356f5-75a2-4335-a2cc-66b2668d8196
	I0806 00:55:33.239205    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:33.239209    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:33.239213    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:33.239216    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:33.239223    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:33 GMT
	I0806 00:55:33.239427    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"1555","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7013 chars]
	I0806 00:55:33.239784    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:33.239794    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:33.239815    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:33.239819    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:33.240879    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:33.240887    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:33.240894    5434 round_trippers.go:580]     Audit-Id: 2369d202-002f-40d2-aceb-e1666583ff99
	I0806 00:55:33.240900    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:33.240905    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:33.240909    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:33.240926    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:33.240933    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:33 GMT
	I0806 00:55:33.241144    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1534","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0806 00:55:33.736234    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:55:33.736268    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:33.736280    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:33.736286    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:33.738886    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:33.738898    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:33.738905    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:33.738910    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:33 GMT
	I0806 00:55:33.738914    5434 round_trippers.go:580]     Audit-Id: 4f34e4e0-6fab-4bbf-a5a7-b9c6b54a911c
	I0806 00:55:33.738917    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:33.738921    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:33.738924    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:33.739384    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"1555","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7013 chars]
	I0806 00:55:33.739745    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:33.739754    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:33.739762    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:33.739768    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:33.740922    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:33.740932    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:33.740939    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:33.740958    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:33.740963    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:33.740966    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:33.740968    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:33 GMT
	I0806 00:55:33.740971    5434 round_trippers.go:580]     Audit-Id: dd1dcf4b-68af-4cac-a6ac-01ce652573be
	I0806 00:55:33.741127    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1534","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0806 00:55:34.236513    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:55:34.236536    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:34.236544    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:34.236551    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:34.239043    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:34.239057    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:34.239067    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:34.239073    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:34.239079    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:34 GMT
	I0806 00:55:34.239096    5434 round_trippers.go:580]     Audit-Id: 183d64ec-69e1-479e-9a26-adaaae7a199d
	I0806 00:55:34.239104    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:34.239107    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:34.239237    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"1561","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6784 chars]
	I0806 00:55:34.239594    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:34.239603    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:34.239618    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:34.239625    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:34.241058    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:34.241065    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:34.241089    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:34.241119    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:34.241127    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:34.241135    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:34.241139    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:34 GMT
	I0806 00:55:34.241142    5434 round_trippers.go:580]     Audit-Id: ff2044d7-5235-4b3a-89c1-75ac1ce9e438
	I0806 00:55:34.241223    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1534","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0806 00:55:34.241398    5434 pod_ready.go:92] pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace has status "Ready":"True"
	I0806 00:55:34.241406    5434 pod_ready.go:81] duration metric: took 3.505678748s for pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:34.241413    5434 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:34.241441    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-100000
	I0806 00:55:34.241446    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:34.241451    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:34.241454    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:34.242496    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:34.242506    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:34.242514    5434 round_trippers.go:580]     Audit-Id: fb7bf067-5387-43ef-ad33-3a7388ee70ff
	I0806 00:55:34.242524    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:34.242527    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:34.242530    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:34.242532    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:34.242535    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:34 GMT
	I0806 00:55:34.242649    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-100000","namespace":"kube-system","uid":"227ab7d9-399e-4151-bee7-1520182e38fe","resourceVersion":"1536","creationTimestamp":"2024-08-06T07:37:59Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"4d956ffcd8bdef6a75a3174d9c9d792c","kubernetes.io/config.mirror":"4d956ffcd8bdef6a75a3174d9c9d792c","kubernetes.io/config.seen":"2024-08-06T07:37:55.730523562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:37:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6358 chars]
	I0806 00:55:34.242856    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:34.242863    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:34.242868    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:34.242872    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:34.243888    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:34.243896    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:34.243903    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:34 GMT
	I0806 00:55:34.243906    5434 round_trippers.go:580]     Audit-Id: 71bcf85b-3031-49f2-87ff-bd80d7924d53
	I0806 00:55:34.243909    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:34.243912    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:34.243915    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:34.243918    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:34.244143    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1534","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0806 00:55:34.244309    5434 pod_ready.go:92] pod "etcd-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:55:34.244317    5434 pod_ready.go:81] duration metric: took 2.898686ms for pod "etcd-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:34.244325    5434 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:34.244354    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-100000
	I0806 00:55:34.244359    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:34.244365    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:34.244369    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:34.245259    5434 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:55:34.245266    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:34.245271    5434 round_trippers.go:580]     Audit-Id: 847e525d-9412-44ea-9956-67c962f6c612
	I0806 00:55:34.245274    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:34.245278    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:34.245281    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:34.245286    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:34.245289    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:34 GMT
	I0806 00:55:34.245479    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-100000","namespace":"kube-system","uid":"ce1dee9b-5f30-49a9-9066-7faf5f65c4d3","resourceVersion":"1538","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"7812fbdfd4f741d8b504bcb30d9268c5","kubernetes.io/config.mirror":"7812fbdfd4f741d8b504bcb30d9268c5","kubernetes.io/config.seen":"2024-08-06T07:38:00.425843150Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7892 chars]
	I0806 00:55:34.245715    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:34.245722    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:34.245727    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:34.245731    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:34.246730    5434 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:55:34.246738    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:34.246745    5434 round_trippers.go:580]     Audit-Id: e82f2502-2ac5-4a7b-9ae0-232b9e6b9705
	I0806 00:55:34.246750    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:34.246754    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:34.246758    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:34.246760    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:34.246762    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:34 GMT
	I0806 00:55:34.246958    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1534","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0806 00:55:34.247119    5434 pod_ready.go:92] pod "kube-apiserver-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:55:34.247126    5434 pod_ready.go:81] duration metric: took 2.79564ms for pod "kube-apiserver-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:34.247138    5434 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:34.247163    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-100000
	I0806 00:55:34.247167    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:34.247173    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:34.247177    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:34.248305    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:34.248316    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:34.248323    5434 round_trippers.go:580]     Audit-Id: 4a3a41e2-c462-4a3b-950f-498e978d7010
	I0806 00:55:34.248329    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:34.248332    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:34.248335    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:34.248338    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:34.248341    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:34 GMT
	I0806 00:55:34.248548    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-100000","namespace":"kube-system","uid":"cefe88fb-c337-47c3-b4f2-acdadde539f2","resourceVersion":"1546","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0ae29164078dfb7d8ac7d5a935c4d875","kubernetes.io/config.mirror":"0ae29164078dfb7d8ac7d5a935c4d875","kubernetes.io/config.seen":"2024-08-06T07:38:00.425770816Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7465 chars]
	I0806 00:55:34.248779    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:34.248786    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:34.248792    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:34.248797    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:34.249891    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:34.249899    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:34.249905    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:34.249911    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:34 GMT
	I0806 00:55:34.249916    5434 round_trippers.go:580]     Audit-Id: 428671cd-e6b7-4d7e-b95d-c3318369f09a
	I0806 00:55:34.249925    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:34.249928    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:34.249930    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:34.250080    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1534","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0806 00:55:34.250236    5434 pod_ready.go:92] pod "kube-controller-manager-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:55:34.250243    5434 pod_ready.go:81] duration metric: took 3.098846ms for pod "kube-controller-manager-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:34.250252    5434 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-crsrr" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:34.250275    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crsrr
	I0806 00:55:34.250280    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:34.250285    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:34.250290    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:34.251318    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:34.251326    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:34.251331    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:34.251335    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:34.251339    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:34 GMT
	I0806 00:55:34.251342    5434 round_trippers.go:580]     Audit-Id: f2525451-85fe-4e99-a438-f8fa068013c2
	I0806 00:55:34.251345    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:34.251348    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:34.251508    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-crsrr","generateName":"kube-proxy-","namespace":"kube-system","uid":"f72beca3-9601-4aad-b3ba-33f8de5db052","resourceVersion":"1541","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aeb7868a-2175-4480-b58d-3eb9a593c884","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aeb7868a-2175-4480-b58d-3eb9a593c884\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0806 00:55:34.251727    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:34.251734    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:34.251740    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:34.251743    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:34.252663    5434 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:55:34.252671    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:34.252678    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:34 GMT
	I0806 00:55:34.252680    5434 round_trippers.go:580]     Audit-Id: 86232fe5-5540-4c80-847d-fd7de8db40dd
	I0806 00:55:34.252684    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:34.252688    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:34.252691    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:34.252693    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:34.252835    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1534","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0806 00:55:34.253021    5434 pod_ready.go:92] pod "kube-proxy-crsrr" in "kube-system" namespace has status "Ready":"True"
	I0806 00:55:34.253028    5434 pod_ready.go:81] duration metric: took 2.771874ms for pod "kube-proxy-crsrr" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:34.253034    5434 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d9c42" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:34.437762    5434 request.go:629] Waited for 184.675553ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d9c42
	I0806 00:55:34.437887    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d9c42
	I0806 00:55:34.437901    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:34.437913    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:34.437920    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:34.440660    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:34.440681    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:34.440691    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:34 GMT
	I0806 00:55:34.440726    5434 round_trippers.go:580]     Audit-Id: f785c571-090b-442e-a6a8-eec70e5f8bc9
	I0806 00:55:34.440739    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:34.440745    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:34.440752    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:34.440761    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:34.441141    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-d9c42","generateName":"kube-proxy-","namespace":"kube-system","uid":"fe685526-4722-4113-b2b3-9a84182541b7","resourceVersion":"1300","creationTimestamp":"2024-08-06T07:52:07Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aeb7868a-2175-4480-b58d-3eb9a593c884","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:52:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aeb7868a-2175-4480-b58d-3eb9a593c884\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5832 chars]
	I0806 00:55:34.636584    5434 request.go:629] Waited for 195.11039ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m03
	I0806 00:55:34.636706    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m03
	I0806 00:55:34.636716    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:34.636727    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:34.636737    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:34.638715    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:34.638730    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:34.638740    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:34.638750    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:34.638757    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:34 GMT
	I0806 00:55:34.638764    5434 round_trippers.go:580]     Audit-Id: 16712eef-e2e5-4984-9a66-1e7088a908f1
	I0806 00:55:34.638768    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:34.638773    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:34.638994    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m03","uid":"3008e7de-9d1d-41e0-b794-0ab4c70ffeba","resourceVersion":"1326","creationTimestamp":"2024-08-06T07:53:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_53_13_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:53:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3811 chars]
	I0806 00:55:34.639211    5434 pod_ready.go:92] pod "kube-proxy-d9c42" in "kube-system" namespace has status "Ready":"True"
	I0806 00:55:34.639222    5434 pod_ready.go:81] duration metric: took 386.175188ms for pod "kube-proxy-d9c42" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:34.639231    5434 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:34.837554    5434 request.go:629] Waited for 198.259941ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-100000
	I0806 00:55:34.837695    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-100000
	I0806 00:55:34.837706    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:34.837717    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:34.837722    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:34.840160    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:34.840190    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:34.840201    5434 round_trippers.go:580]     Audit-Id: 8daea4df-6291-470e-98df-86fa130a4477
	I0806 00:55:34.840207    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:34.840213    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:34.840220    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:34.840231    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:34.840236    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:34 GMT
	I0806 00:55:34.840470    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-100000","namespace":"kube-system","uid":"773d7bde-86f3-4e9d-b4aa-67ca3b345180","resourceVersion":"1547","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4d38f57d568be838072abd789adb44b9","kubernetes.io/config.mirror":"4d38f57d568be838072abd789adb44b9","kubernetes.io/config.seen":"2024-08-06T07:38:00.425836810Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5195 chars]
	I0806 00:55:35.036652    5434 request.go:629] Waited for 195.891289ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:35.036755    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:35.036767    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:35.036778    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:35.036799    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:35.039118    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:35.039132    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:35.039139    5434 round_trippers.go:580]     Audit-Id: bf7d8dbb-663c-4c2c-a231-cee56db0c11c
	I0806 00:55:35.039143    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:35.039145    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:35.039174    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:35.039185    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:35.039190    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:35 GMT
	I0806 00:55:35.039279    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1566","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0806 00:55:35.039527    5434 pod_ready.go:92] pod "kube-scheduler-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:55:35.039538    5434 pod_ready.go:81] duration metric: took 400.291605ms for pod "kube-scheduler-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:35.039546    5434 pod_ready.go:38] duration metric: took 4.309719242s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 00:55:35.039564    5434 api_server.go:52] waiting for apiserver process to appear ...
	I0806 00:55:35.039630    5434 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:55:35.052189    5434 command_runner.go:130] > 1781
	I0806 00:55:35.052291    5434 api_server.go:72] duration metric: took 15.082787345s to wait for apiserver process to appear ...
	I0806 00:55:35.052303    5434 api_server.go:88] waiting for apiserver healthz status ...
	I0806 00:55:35.052313    5434 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0806 00:55:35.055676    5434 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0806 00:55:35.055708    5434 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I0806 00:55:35.055713    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:35.055719    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:35.055723    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:35.056340    5434 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:55:35.056348    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:35.056353    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:35.056358    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:35.056362    5434 round_trippers.go:580]     Content-Length: 263
	I0806 00:55:35.056364    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:35 GMT
	I0806 00:55:35.056367    5434 round_trippers.go:580]     Audit-Id: 73154886-0ddc-48d8-83b9-2382f7a5c2a0
	I0806 00:55:35.056369    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:35.056372    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:35.056380    5434 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0806 00:55:35.056401    5434 api_server.go:141] control plane version: v1.30.3
	I0806 00:55:35.056409    5434 api_server.go:131] duration metric: took 4.101304ms to wait for apiserver health ...
	I0806 00:55:35.056414    5434 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 00:55:35.236933    5434 request.go:629] Waited for 180.477708ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:55:35.236987    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:55:35.236995    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:35.237051    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:35.237059    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:35.240656    5434 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:55:35.240666    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:35.240671    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:35.240674    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:35 GMT
	I0806 00:55:35.240694    5434 round_trippers.go:580]     Audit-Id: c968a226-1560-4786-b55a-0b60e1c84edb
	I0806 00:55:35.240716    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:35.240739    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:35.240746    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:35.241438    5434 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1569"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"1561","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 72029 chars]
	I0806 00:55:35.243028    5434 system_pods.go:59] 10 kube-system pods found
	I0806 00:55:35.243040    5434 system_pods.go:61] "coredns-7db6d8ff4d-snf8h" [80bd44de-6f91-4e47-8832-a66b3c64808d] Running
	I0806 00:55:35.243043    5434 system_pods.go:61] "etcd-multinode-100000" [227ab7d9-399e-4151-bee7-1520182e38fe] Running
	I0806 00:55:35.243046    5434 system_pods.go:61] "kindnet-dn72w" [34a2c1f4-38da-4e95-8d44-d2eae75e5dcb] Running
	I0806 00:55:35.243049    5434 system_pods.go:61] "kindnet-g2xk7" [84207ead-3403-4759-9bf2-ae0aa742699e] Running
	I0806 00:55:35.243052    5434 system_pods.go:61] "kube-apiserver-multinode-100000" [ce1dee9b-5f30-49a9-9066-7faf5f65c4d3] Running
	I0806 00:55:35.243054    5434 system_pods.go:61] "kube-controller-manager-multinode-100000" [cefe88fb-c337-47c3-b4f2-acdadde539f2] Running
	I0806 00:55:35.243057    5434 system_pods.go:61] "kube-proxy-crsrr" [f72beca3-9601-4aad-b3ba-33f8de5db052] Running
	I0806 00:55:35.243060    5434 system_pods.go:61] "kube-proxy-d9c42" [fe685526-4722-4113-b2b3-9a84182541b7] Running
	I0806 00:55:35.243062    5434 system_pods.go:61] "kube-scheduler-multinode-100000" [773d7bde-86f3-4e9d-b4aa-67ca3b345180] Running
	I0806 00:55:35.243065    5434 system_pods.go:61] "storage-provisioner" [38b20fa5-6002-4e12-860f-1aa0047581b1] Running
	I0806 00:55:35.243069    5434 system_pods.go:74] duration metric: took 186.64791ms to wait for pod list to return data ...
	I0806 00:55:35.243074    5434 default_sa.go:34] waiting for default service account to be created ...
	I0806 00:55:35.437141    5434 request.go:629] Waited for 193.980924ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0806 00:55:35.437265    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0806 00:55:35.437276    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:35.437286    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:35.437295    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:35.440447    5434 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:55:35.440462    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:35.440469    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:35.440473    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:35.440477    5434 round_trippers.go:580]     Content-Length: 262
	I0806 00:55:35.440481    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:35 GMT
	I0806 00:55:35.440487    5434 round_trippers.go:580]     Audit-Id: e1253ccf-74ad-4370-b01a-74c5f89d2d5b
	I0806 00:55:35.440491    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:35.440493    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:35.440507    5434 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1569"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b920a0f4-26ad-4389-bfd3-1a9764da9619","resourceVersion":"336","creationTimestamp":"2024-08-06T07:38:14Z"}}]}
	I0806 00:55:35.440658    5434 default_sa.go:45] found service account: "default"
	I0806 00:55:35.440671    5434 default_sa.go:55] duration metric: took 197.58859ms for default service account to be created ...
	I0806 00:55:35.440682    5434 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 00:55:35.637354    5434 request.go:629] Waited for 196.567541ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:55:35.637426    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:55:35.637435    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:35.637469    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:35.637484    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:35.640994    5434 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:55:35.641005    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:35.641010    5434 round_trippers.go:580]     Audit-Id: 75dc9c49-a29e-4d79-9d8b-13c10799dcee
	I0806 00:55:35.641014    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:35.641017    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:35.641020    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:35.641023    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:35.641027    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:35 GMT
	I0806 00:55:35.641703    5434 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1569"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"1561","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 72029 chars]
	I0806 00:55:35.643246    5434 system_pods.go:86] 10 kube-system pods found
	I0806 00:55:35.643257    5434 system_pods.go:89] "coredns-7db6d8ff4d-snf8h" [80bd44de-6f91-4e47-8832-a66b3c64808d] Running
	I0806 00:55:35.643262    5434 system_pods.go:89] "etcd-multinode-100000" [227ab7d9-399e-4151-bee7-1520182e38fe] Running
	I0806 00:55:35.643267    5434 system_pods.go:89] "kindnet-dn72w" [34a2c1f4-38da-4e95-8d44-d2eae75e5dcb] Running
	I0806 00:55:35.643271    5434 system_pods.go:89] "kindnet-g2xk7" [84207ead-3403-4759-9bf2-ae0aa742699e] Running
	I0806 00:55:35.643275    5434 system_pods.go:89] "kube-apiserver-multinode-100000" [ce1dee9b-5f30-49a9-9066-7faf5f65c4d3] Running
	I0806 00:55:35.643279    5434 system_pods.go:89] "kube-controller-manager-multinode-100000" [cefe88fb-c337-47c3-b4f2-acdadde539f2] Running
	I0806 00:55:35.643283    5434 system_pods.go:89] "kube-proxy-crsrr" [f72beca3-9601-4aad-b3ba-33f8de5db052] Running
	I0806 00:55:35.643286    5434 system_pods.go:89] "kube-proxy-d9c42" [fe685526-4722-4113-b2b3-9a84182541b7] Running
	I0806 00:55:35.643297    5434 system_pods.go:89] "kube-scheduler-multinode-100000" [773d7bde-86f3-4e9d-b4aa-67ca3b345180] Running
	I0806 00:55:35.643300    5434 system_pods.go:89] "storage-provisioner" [38b20fa5-6002-4e12-860f-1aa0047581b1] Running
	I0806 00:55:35.643306    5434 system_pods.go:126] duration metric: took 202.613344ms to wait for k8s-apps to be running ...
	I0806 00:55:35.643314    5434 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 00:55:35.643362    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:55:35.655159    5434 system_svc.go:56] duration metric: took 11.839973ms WaitForService to wait for kubelet
	I0806 00:55:35.655174    5434 kubeadm.go:582] duration metric: took 15.685657412s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 00:55:35.655187    5434 node_conditions.go:102] verifying NodePressure condition ...
	I0806 00:55:35.837513    5434 request.go:629] Waited for 182.238504ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes
	I0806 00:55:35.837562    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0806 00:55:35.837575    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:35.837674    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:35.837681    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:35.840771    5434 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:55:35.840788    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:35.840797    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:35.840801    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:35.840805    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:35.840809    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:35 GMT
	I0806 00:55:35.840813    5434 round_trippers.go:580]     Audit-Id: d5fa65d8-9d75-48e1-a19e-fc8717ce8edd
	I0806 00:55:35.840818    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:35.840995    5434 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1569"},"items":[{"metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1566","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10031 chars]
	I0806 00:55:35.841382    5434 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 00:55:35.841395    5434 node_conditions.go:123] node cpu capacity is 2
	I0806 00:55:35.841404    5434 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 00:55:35.841409    5434 node_conditions.go:123] node cpu capacity is 2
	I0806 00:55:35.841418    5434 node_conditions.go:105] duration metric: took 186.219515ms to run NodePressure ...
	I0806 00:55:35.841429    5434 start.go:241] waiting for startup goroutines ...
	I0806 00:55:35.841437    5434 start.go:246] waiting for cluster config update ...
	I0806 00:55:35.841445    5434 start.go:255] writing updated cluster config ...
	I0806 00:55:35.863202    5434 out.go:177] 
	I0806 00:55:35.883985    5434 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:55:35.884076    5434 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:55:35.905924    5434 out.go:177] * Starting "multinode-100000-m02" worker node in "multinode-100000" cluster
	I0806 00:55:35.947857    5434 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:55:35.947891    5434 cache.go:56] Caching tarball of preloaded images
	I0806 00:55:35.948065    5434 preload.go:172] Found /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0806 00:55:35.948085    5434 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 00:55:35.948216    5434 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:55:35.949041    5434 start.go:360] acquireMachinesLock for multinode-100000-m02: {Name:mk23fe223591838ba69a1052c4474834b6e8897d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:55:35.949141    5434 start.go:364] duration metric: took 76.368µs to acquireMachinesLock for "multinode-100000-m02"
	I0806 00:55:35.949168    5434 start.go:96] Skipping create...Using existing machine configuration
	I0806 00:55:35.949175    5434 fix.go:54] fixHost starting: m02
	I0806 00:55:35.949547    5434 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:55:35.949564    5434 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:55:35.958609    5434 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53096
	I0806 00:55:35.958994    5434 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:55:35.959363    5434 main.go:141] libmachine: Using API Version  1
	I0806 00:55:35.959380    5434 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:55:35.959624    5434 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:55:35.959754    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:55:35.959842    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetState
	I0806 00:55:35.959924    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:55:35.959995    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:55:35.960908    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid 4427 missing from process table
	I0806 00:55:35.960925    5434 fix.go:112] recreateIfNeeded on multinode-100000-m02: state=Stopped err=<nil>
	I0806 00:55:35.960936    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	W0806 00:55:35.961012    5434 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 00:55:35.981954    5434 out.go:177] * Restarting existing hyperkit VM for "multinode-100000-m02" ...
	I0806 00:55:36.023997    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .Start
	I0806 00:55:36.024279    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:55:36.024350    5434 main.go:141] libmachine: (multinode-100000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/hyperkit.pid
	I0806 00:55:36.026126    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid 4427 missing from process table
	I0806 00:55:36.026148    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | pid 4427 is in state "Stopped"
	I0806 00:55:36.026165    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/hyperkit.pid...
	I0806 00:55:36.026384    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | Using UUID 11e38ce6-805a-4a8b-9cb1-968ee3a613d4
	I0806 00:55:36.053863    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | Generated MAC ee:b:b7:3a:75:5c
	I0806 00:55:36.053890    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000
	I0806 00:55:36.054036    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"11e38ce6-805a-4a8b-9cb1-968ee3a613d4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bc9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 00:55:36.054065    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"11e38ce6-805a-4a8b-9cb1-968ee3a613d4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bc9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 00:55:36.054112    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "11e38ce6-805a-4a8b-9cb1-968ee3a613d4", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/multinode-100000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"}
	I0806 00:55:36.054150    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 11e38ce6-805a-4a8b-9cb1-968ee3a613d4 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/multinode-100000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"
	I0806 00:55:36.054170    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0806 00:55:36.055617    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 DEBUG: hyperkit: Pid is 5480
	I0806 00:55:36.056013    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 0
	I0806 00:55:36.056032    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:55:36.056086    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 5480
	I0806 00:55:36.058061    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:55:36.058156    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 14 entries in /var/db/dhcpd_leases!
	I0806 00:55:36.058180    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b32856}
	I0806 00:55:36.058195    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b327da}
	I0806 00:55:36.058205    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b32483}
	I0806 00:55:36.058212    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | Found match: ee:b:b7:3a:75:5c
	I0806 00:55:36.058221    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | IP: 192.169.0.14
	I0806 00:55:36.058273    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetConfigRaw
	I0806 00:55:36.058939    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:55:36.059162    5434 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:55:36.059607    5434 machine.go:94] provisionDockerMachine start ...
	I0806 00:55:36.059631    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:55:36.059771    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:55:36.059905    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:55:36.060011    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:36.060138    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:36.060215    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:55:36.060317    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:55:36.060488    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:55:36.060498    5434 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 00:55:36.063411    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0806 00:55:36.071802    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0806 00:55:36.072735    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:55:36.072761    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:55:36.072772    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:55:36.072788    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:55:36.457976    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0806 00:55:36.457992    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0806 00:55:36.572891    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:55:36.572918    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:55:36.572926    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:55:36.572933    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:55:36.573761    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0806 00:55:36.573770    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0806 00:55:42.151666    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:42 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0806 00:55:42.151706    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:42 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0806 00:55:42.151714    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:42 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0806 00:55:42.175264    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:42 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0806 00:55:47.123974    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0806 00:55:47.123989    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:55:47.124115    5434 buildroot.go:166] provisioning hostname "multinode-100000-m02"
	I0806 00:55:47.124127    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:55:47.124228    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:55:47.124335    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:55:47.124426    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:47.124515    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:47.124628    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:55:47.124758    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:55:47.124888    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:55:47.124896    5434 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-100000-m02 && echo "multinode-100000-m02" | sudo tee /etc/hostname
	I0806 00:55:47.193924    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-100000-m02
	
	I0806 00:55:47.193947    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:55:47.194084    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:55:47.194175    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:47.194277    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:47.194381    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:55:47.194556    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:55:47.194713    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:55:47.194725    5434 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-100000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-100000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-100000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 00:55:47.260861    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:55:47.260877    5434 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19370-944/.minikube CaCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19370-944/.minikube}
	I0806 00:55:47.260890    5434 buildroot.go:174] setting up certificates
	I0806 00:55:47.260897    5434 provision.go:84] configureAuth start
	I0806 00:55:47.260905    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:55:47.261040    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:55:47.261134    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:55:47.261216    5434 provision.go:143] copyHostCerts
	I0806 00:55:47.261245    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:55:47.261296    5434 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem, removing ...
	I0806 00:55:47.261302    5434 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:55:47.261431    5434 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem (1679 bytes)
	I0806 00:55:47.261631    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:55:47.261668    5434 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem, removing ...
	I0806 00:55:47.261673    5434 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:55:47.261752    5434 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem (1078 bytes)
	I0806 00:55:47.261912    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:55:47.261943    5434 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem, removing ...
	I0806 00:55:47.261948    5434 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:55:47.262015    5434 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem (1123 bytes)
	I0806 00:55:47.262174    5434 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem org=jenkins.multinode-100000-m02 san=[127.0.0.1 192.169.0.14 localhost minikube multinode-100000-m02]
	I0806 00:55:47.800015    5434 provision.go:177] copyRemoteCerts
	I0806 00:55:47.800090    5434 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 00:55:47.800110    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:55:47.800265    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:55:47.800359    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:47.800444    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:55:47.800586    5434 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:55:47.835822    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0806 00:55:47.835891    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0806 00:55:47.855534    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0806 00:55:47.855602    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 00:55:47.875212    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0806 00:55:47.875294    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 00:55:47.894813    5434 provision.go:87] duration metric: took 633.894969ms to configureAuth
	I0806 00:55:47.894825    5434 buildroot.go:189] setting minikube options for container-runtime
	I0806 00:55:47.894996    5434 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:55:47.895010    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:55:47.895165    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:55:47.895256    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:55:47.895340    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:47.895413    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:47.895512    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:55:47.895632    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:55:47.895760    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:55:47.895768    5434 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0806 00:55:47.960699    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0806 00:55:47.960711    5434 buildroot.go:70] root file system type: tmpfs
	I0806 00:55:47.960788    5434 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0806 00:55:47.960800    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:55:47.960931    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:55:47.961018    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:47.961118    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:47.961201    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:55:47.961325    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:55:47.961472    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:55:47.961517    5434 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.13"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0806 00:55:48.030832    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.13
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0806 00:55:48.030856    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:55:48.030994    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:55:48.031096    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:48.031218    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:48.031324    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:55:48.031453    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:55:48.031608    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:55:48.031622    5434 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0806 00:55:49.580688    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0806 00:55:49.580712    5434 machine.go:97] duration metric: took 13.520821195s to provisionDockerMachine
	I0806 00:55:49.580721    5434 start.go:293] postStartSetup for "multinode-100000-m02" (driver="hyperkit")
	I0806 00:55:49.580729    5434 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 00:55:49.580741    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:55:49.580935    5434 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 00:55:49.580949    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:55:49.581045    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:55:49.581137    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:49.581219    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:55:49.581315    5434 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:55:49.618661    5434 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 00:55:49.621544    5434 command_runner.go:130] > NAME=Buildroot
	I0806 00:55:49.621558    5434 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0806 00:55:49.621564    5434 command_runner.go:130] > ID=buildroot
	I0806 00:55:49.621570    5434 command_runner.go:130] > VERSION_ID=2023.02.9
	I0806 00:55:49.621577    5434 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0806 00:55:49.621634    5434 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 00:55:49.621644    5434 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/addons for local assets ...
	I0806 00:55:49.621733    5434 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/files for local assets ...
	I0806 00:55:49.621868    5434 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> 14372.pem in /etc/ssl/certs
	I0806 00:55:49.621877    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> /etc/ssl/certs/14372.pem
	I0806 00:55:49.622032    5434 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 00:55:49.629218    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem --> /etc/ssl/certs/14372.pem (1708 bytes)
	I0806 00:55:49.648584    5434 start.go:296] duration metric: took 67.854005ms for postStartSetup
	I0806 00:55:49.648604    5434 fix.go:56] duration metric: took 13.699160102s for fixHost
	I0806 00:55:49.648620    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:55:49.648751    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:55:49.648847    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:49.648955    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:49.649053    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:55:49.649169    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:55:49.649301    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:55:49.649308    5434 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0806 00:55:49.708465    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722930949.775003266
	
	I0806 00:55:49.708477    5434 fix.go:216] guest clock: 1722930949.775003266
	I0806 00:55:49.708483    5434 fix.go:229] Guest: 2024-08-06 00:55:49.775003266 -0700 PDT Remote: 2024-08-06 00:55:49.648611 -0700 PDT m=+56.909349334 (delta=126.392266ms)
	I0806 00:55:49.708493    5434 fix.go:200] guest clock delta is within tolerance: 126.392266ms
	I0806 00:55:49.708497    5434 start.go:83] releasing machines lock for "multinode-100000-m02", held for 13.759075291s
	I0806 00:55:49.708513    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:55:49.708635    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:55:49.732535    5434 out.go:177] * Found network options:
	I0806 00:55:49.751749    5434 out.go:177]   - NO_PROXY=192.169.0.13
	W0806 00:55:49.772913    5434 proxy.go:119] fail to check proxy env: Error ip not in block
	I0806 00:55:49.772952    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:55:49.773817    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:55:49.774060    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:55:49.774180    5434 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 00:55:49.774221    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	W0806 00:55:49.774299    5434 proxy.go:119] fail to check proxy env: Error ip not in block
	I0806 00:55:49.774413    5434 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0806 00:55:49.774425    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:55:49.774433    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:55:49.774632    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:49.774664    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:55:49.774853    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:55:49.774879    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:49.775039    5434 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:55:49.775067    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:55:49.775186    5434 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:55:49.810620    5434 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0806 00:55:49.810895    5434 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 00:55:49.810953    5434 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 00:55:49.857296    5434 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0806 00:55:49.857332    5434 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0806 00:55:49.857355    5434 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 00:55:49.857365    5434 start.go:495] detecting cgroup driver to use...
	I0806 00:55:49.857468    5434 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:55:49.872692    5434 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0806 00:55:49.873028    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0806 00:55:49.882153    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0806 00:55:49.890973    5434 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0806 00:55:49.891028    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0806 00:55:49.899958    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:55:49.908743    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0806 00:55:49.917593    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:55:49.926553    5434 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 00:55:49.935690    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0806 00:55:49.948327    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0806 00:55:49.962759    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0806 00:55:49.973687    5434 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 00:55:49.984291    5434 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0806 00:55:49.984563    5434 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 00:55:49.996230    5434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:55:50.092608    5434 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0806 00:55:50.109699    5434 start.go:495] detecting cgroup driver to use...
	I0806 00:55:50.109769    5434 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0806 00:55:50.121516    5434 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0806 00:55:50.121774    5434 command_runner.go:130] > [Unit]
	I0806 00:55:50.121784    5434 command_runner.go:130] > Description=Docker Application Container Engine
	I0806 00:55:50.121789    5434 command_runner.go:130] > Documentation=https://docs.docker.com
	I0806 00:55:50.121793    5434 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0806 00:55:50.121797    5434 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0806 00:55:50.121802    5434 command_runner.go:130] > StartLimitBurst=3
	I0806 00:55:50.121810    5434 command_runner.go:130] > StartLimitIntervalSec=60
	I0806 00:55:50.121814    5434 command_runner.go:130] > [Service]
	I0806 00:55:50.121817    5434 command_runner.go:130] > Type=notify
	I0806 00:55:50.121820    5434 command_runner.go:130] > Restart=on-failure
	I0806 00:55:50.121824    5434 command_runner.go:130] > Environment=NO_PROXY=192.169.0.13
	I0806 00:55:50.121830    5434 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0806 00:55:50.121835    5434 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0806 00:55:50.121841    5434 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0806 00:55:50.121847    5434 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0806 00:55:50.121852    5434 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0806 00:55:50.121857    5434 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0806 00:55:50.121866    5434 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0806 00:55:50.121876    5434 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0806 00:55:50.121882    5434 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0806 00:55:50.121885    5434 command_runner.go:130] > ExecStart=
	I0806 00:55:50.121901    5434 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0806 00:55:50.121909    5434 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0806 00:55:50.121916    5434 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0806 00:55:50.121921    5434 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0806 00:55:50.121925    5434 command_runner.go:130] > LimitNOFILE=infinity
	I0806 00:55:50.121930    5434 command_runner.go:130] > LimitNPROC=infinity
	I0806 00:55:50.121935    5434 command_runner.go:130] > LimitCORE=infinity
	I0806 00:55:50.121942    5434 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0806 00:55:50.121947    5434 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0806 00:55:50.121951    5434 command_runner.go:130] > TasksMax=infinity
	I0806 00:55:50.121955    5434 command_runner.go:130] > TimeoutStartSec=0
	I0806 00:55:50.121985    5434 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0806 00:55:50.121992    5434 command_runner.go:130] > Delegate=yes
	I0806 00:55:50.121997    5434 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0806 00:55:50.122010    5434 command_runner.go:130] > KillMode=process
	I0806 00:55:50.122016    5434 command_runner.go:130] > [Install]
	I0806 00:55:50.122022    5434 command_runner.go:130] > WantedBy=multi-user.target
	I0806 00:55:50.122096    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:55:50.139137    5434 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 00:55:50.154045    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:55:50.165113    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:55:50.175785    5434 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0806 00:55:50.197105    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:55:50.207733    5434 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:55:50.223158    5434 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0806 00:55:50.223413    5434 ssh_runner.go:195] Run: which cri-dockerd
	I0806 00:55:50.226354    5434 command_runner.go:130] > /usr/bin/cri-dockerd
	I0806 00:55:50.226523    5434 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0806 00:55:50.233762    5434 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0806 00:55:50.247450    5434 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0806 00:55:50.342692    5434 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0806 00:55:50.443763    5434 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0806 00:55:50.443793    5434 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0806 00:55:50.457932    5434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:55:50.549367    5434 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:55:52.862125    5434 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.312693665s)
	I0806 00:55:52.862187    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0806 00:55:52.872409    5434 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0806 00:55:52.885181    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0806 00:55:52.895674    5434 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0806 00:55:52.993698    5434 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0806 00:55:53.084996    5434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:55:53.177209    5434 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0806 00:55:53.191294    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0806 00:55:53.202769    5434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:55:53.315208    5434 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0806 00:55:53.375448    5434 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0806 00:55:53.375521    5434 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0806 00:55:53.379714    5434 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0806 00:55:53.379725    5434 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0806 00:55:53.379729    5434 command_runner.go:130] > Device: 0,22	Inode: 749         Links: 1
	I0806 00:55:53.379738    5434 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0806 00:55:53.379743    5434 command_runner.go:130] > Access: 2024-08-06 07:55:53.393995737 +0000
	I0806 00:55:53.379752    5434 command_runner.go:130] > Modify: 2024-08-06 07:55:53.393995737 +0000
	I0806 00:55:53.379756    5434 command_runner.go:130] > Change: 2024-08-06 07:55:53.395995436 +0000
	I0806 00:55:53.379759    5434 command_runner.go:130] >  Birth: -
	I0806 00:55:53.379848    5434 start.go:563] Will wait 60s for crictl version
	I0806 00:55:53.379892    5434 ssh_runner.go:195] Run: which crictl
	I0806 00:55:53.382613    5434 command_runner.go:130] > /usr/bin/crictl
	I0806 00:55:53.382774    5434 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 00:55:53.409192    5434 command_runner.go:130] > Version:  0.1.0
	I0806 00:55:53.409227    5434 command_runner.go:130] > RuntimeName:  docker
	I0806 00:55:53.409267    5434 command_runner.go:130] > RuntimeVersion:  27.1.1
	I0806 00:55:53.409350    5434 command_runner.go:130] > RuntimeApiVersion:  v1
	I0806 00:55:53.410603    5434 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0806 00:55:53.410671    5434 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0806 00:55:53.426368    5434 command_runner.go:130] > 27.1.1
	I0806 00:55:53.427211    5434 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0806 00:55:53.444242    5434 command_runner.go:130] > 27.1.1
	I0806 00:55:53.466673    5434 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0806 00:55:53.508034    5434 out.go:177]   - env NO_PROXY=192.169.0.13
	I0806 00:55:53.529420    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:55:53.529824    5434 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0806 00:55:53.534548    5434 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 00:55:53.544263    5434 mustload.go:65] Loading cluster: multinode-100000
	I0806 00:55:53.544442    5434 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:55:53.544650    5434 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:55:53.544664    5434 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:55:53.553344    5434 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53117
	I0806 00:55:53.553689    5434 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:55:53.554007    5434 main.go:141] libmachine: Using API Version  1
	I0806 00:55:53.554017    5434 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:55:53.554209    5434 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:55:53.554331    5434 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:55:53.554416    5434 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:55:53.554495    5434 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 5446
	I0806 00:55:53.555418    5434 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:55:53.555667    5434 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:55:53.555683    5434 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:55:53.564417    5434 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53119
	I0806 00:55:53.564918    5434 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:55:53.565269    5434 main.go:141] libmachine: Using API Version  1
	I0806 00:55:53.565286    5434 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:55:53.565510    5434 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:55:53.565629    5434 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:55:53.565741    5434 certs.go:68] Setting up /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000 for IP: 192.169.0.14
	I0806 00:55:53.565747    5434 certs.go:194] generating shared ca certs ...
	I0806 00:55:53.565760    5434 certs.go:226] acquiring lock for ca certs: {Name:mk58145664d6c2b1eff70ba1600cc91cf1a11355 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:55:53.565915    5434 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.key
	I0806 00:55:53.565968    5434 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.key
	I0806 00:55:53.565978    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0806 00:55:53.566002    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0806 00:55:53.566021    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0806 00:55:53.566039    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0806 00:55:53.566128    5434 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437.pem (1338 bytes)
	W0806 00:55:53.566170    5434 certs.go:480] ignoring /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437_empty.pem, impossibly tiny 0 bytes
	I0806 00:55:53.566180    5434 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 00:55:53.566213    5434 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem (1078 bytes)
	I0806 00:55:53.566246    5434 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem (1123 bytes)
	I0806 00:55:53.566280    5434 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem (1679 bytes)
	I0806 00:55:53.566352    5434 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem (1708 bytes)
	I0806 00:55:53.566388    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:55:53.566408    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437.pem -> /usr/share/ca-certificates/1437.pem
	I0806 00:55:53.566426    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> /usr/share/ca-certificates/14372.pem
	I0806 00:55:53.566457    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 00:55:53.586672    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 00:55:53.606199    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 00:55:53.625918    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0806 00:55:53.647471    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 00:55:53.667119    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437.pem --> /usr/share/ca-certificates/1437.pem (1338 bytes)
	I0806 00:55:53.686966    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem --> /usr/share/ca-certificates/14372.pem (1708 bytes)
	I0806 00:55:53.706845    5434 ssh_runner.go:195] Run: openssl version
	I0806 00:55:53.711070    5434 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0806 00:55:53.711296    5434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14372.pem && ln -fs /usr/share/ca-certificates/14372.pem /etc/ssl/certs/14372.pem"
	I0806 00:55:53.719956    5434 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14372.pem
	I0806 00:55:53.723288    5434 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  6 07:14 /usr/share/ca-certificates/14372.pem
	I0806 00:55:53.723381    5434 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:14 /usr/share/ca-certificates/14372.pem
	I0806 00:55:53.723416    5434 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14372.pem
	I0806 00:55:53.727666    5434 command_runner.go:130] > 3ec20f2e
	I0806 00:55:53.727892    5434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14372.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 00:55:53.736269    5434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 00:55:53.744816    5434 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:55:53.748287    5434 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  6 07:05 /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:55:53.748379    5434 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:05 /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:55:53.748413    5434 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:55:53.752496    5434 command_runner.go:130] > b5213941
	I0806 00:55:53.752660    5434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 00:55:53.761114    5434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1437.pem && ln -fs /usr/share/ca-certificates/1437.pem /etc/ssl/certs/1437.pem"
	I0806 00:55:53.769715    5434 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1437.pem
	I0806 00:55:53.773215    5434 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  6 07:14 /usr/share/ca-certificates/1437.pem
	I0806 00:55:53.773323    5434 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:14 /usr/share/ca-certificates/1437.pem
	I0806 00:55:53.773361    5434 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1437.pem
	I0806 00:55:53.777430    5434 command_runner.go:130] > 51391683
	I0806 00:55:53.777626    5434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1437.pem /etc/ssl/certs/51391683.0"
	I0806 00:55:53.786231    5434 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 00:55:53.789398    5434 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0806 00:55:53.789477    5434 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0806 00:55:53.789506    5434 kubeadm.go:934] updating node {m02 192.169.0.14 8443 v1.30.3 docker false true} ...
	I0806 00:55:53.789573    5434 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-100000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 00:55:53.789639    5434 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 00:55:53.796954    5434 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	I0806 00:55:53.796973    5434 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0806 00:55:53.797009    5434 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0806 00:55:53.804541    5434 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0806 00:55:53.804541    5434 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0806 00:55:53.804555    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0806 00:55:53.804559    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0806 00:55:53.804541    5434 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0806 00:55:53.804607    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:55:53.804661    5434 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0806 00:55:53.804681    5434 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0806 00:55:53.816499    5434 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0806 00:55:53.816516    5434 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0806 00:55:53.816499    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0806 00:55:53.816527    5434 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0806 00:55:53.816546    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0806 00:55:53.816560    5434 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0806 00:55:53.816578    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0806 00:55:53.816648    5434 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0806 00:55:53.829730    5434 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0806 00:55:53.831208    5434 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0806 00:55:53.831243    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0806 00:55:54.414827    5434 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0806 00:55:54.422218    5434 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0806 00:55:54.435907    5434 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 00:55:54.449543    5434 ssh_runner.go:195] Run: grep 192.169.0.13	control-plane.minikube.internal$ /etc/hosts
	I0806 00:55:54.452507    5434 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 00:55:54.461871    5434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:55:54.556386    5434 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:55:54.571102    5434 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:55:54.571378    5434 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:55:54.571396    5434 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:55:54.580196    5434 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53121
	I0806 00:55:54.580556    5434 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:55:54.580896    5434 main.go:141] libmachine: Using API Version  1
	I0806 00:55:54.580908    5434 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:55:54.581102    5434 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:55:54.581228    5434 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:55:54.581317    5434 start.go:317] joinCluster: &{Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.15 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:f
alse inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:55:54.581413    5434 start.go:330] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0806 00:55:54.581430    5434 host.go:66] Checking if "multinode-100000-m02" exists ...
	I0806 00:55:54.581682    5434 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:55:54.581718    5434 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:55:54.590743    5434 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53123
	I0806 00:55:54.591100    5434 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:55:54.591441    5434 main.go:141] libmachine: Using API Version  1
	I0806 00:55:54.591451    5434 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:55:54.591650    5434 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:55:54.591769    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:55:54.591858    5434 mustload.go:65] Loading cluster: multinode-100000
	I0806 00:55:54.592019    5434 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:55:54.592247    5434 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:55:54.592264    5434 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:55:54.601054    5434 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53125
	I0806 00:55:54.601443    5434 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:55:54.601863    5434 main.go:141] libmachine: Using API Version  1
	I0806 00:55:54.601879    5434 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:55:54.602097    5434 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:55:54.602211    5434 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:55:54.602312    5434 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:55:54.602385    5434 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 5446
	I0806 00:55:54.603346    5434 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:55:54.603595    5434 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:55:54.603624    5434 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:55:54.612349    5434 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53127
	I0806 00:55:54.612687    5434 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:55:54.613044    5434 main.go:141] libmachine: Using API Version  1
	I0806 00:55:54.613056    5434 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:55:54.613246    5434 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:55:54.613351    5434 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:55:54.613444    5434 api_server.go:166] Checking apiserver status ...
	I0806 00:55:54.613491    5434 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:55:54.613502    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:55:54.613579    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:55:54.613651    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:55:54.613732    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:55:54.613808    5434 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:55:54.655316    5434 command_runner.go:130] > 1781
	I0806 00:55:54.655501    5434 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1781/cgroup
	W0806 00:55:54.662582    5434 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1781/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0806 00:55:54.662643    5434 ssh_runner.go:195] Run: ls
	I0806 00:55:54.665808    5434 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0806 00:55:54.668845    5434 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0806 00:55:54.668896    5434 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl drain multinode-100000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0806 00:55:54.733926    5434 command_runner.go:130] ! Error from server (NotFound): nodes "multinode-100000-m02" not found
	W0806 00:55:54.734037    5434 node.go:126] kubectl drain node "multinode-100000-m02" failed (will continue): sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl drain multinode-100000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (NotFound): nodes "multinode-100000-m02" not found
	I0806 00:55:54.734070    5434 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0806 00:55:54.734088    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:55:54.734240    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:55:54.734347    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:54.734435    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:55:54.734517    5434 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:55:54.797275    5434 command_runner.go:130] ! W0806 07:55:54.866759    1260 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0806 00:55:54.823087    5434 command_runner.go:130] > [preflight] Running pre-flight checks
	I0806 00:55:54.823102    5434 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0806 00:55:54.823107    5434 command_runner.go:130] > [reset] Stopping the kubelet service
	I0806 00:55:54.823111    5434 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0806 00:55:54.823127    5434 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0806 00:55:54.823144    5434 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0806 00:55:54.823151    5434 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0806 00:55:54.823162    5434 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0806 00:55:54.823168    5434 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0806 00:55:54.823174    5434 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0806 00:55:54.823178    5434 command_runner.go:130] > to reset your system's IPVS tables.
	I0806 00:55:54.823184    5434 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0806 00:55:54.823193    5434 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0806 00:55:54.823204    5434 node.go:155] successfully reset node "multinode-100000-m02"
	I0806 00:55:54.823484    5434 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:55:54.823679    5434 kapi.go:59] client config for multinode-100000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key", CAFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1231e1a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 00:55:54.823941    5434 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0806 00:55:54.823978    5434 round_trippers.go:463] DELETE https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:55:54.823982    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:54.823989    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:54.823992    5434 round_trippers.go:473]     Content-Type: application/json
	I0806 00:55:54.823995    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:54.825954    5434 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0806 00:55:54.825963    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:54.825968    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:54.825971    5434 round_trippers.go:580]     Content-Length: 210
	I0806 00:55:54.825974    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:54 GMT
	I0806 00:55:54.825977    5434 round_trippers.go:580]     Audit-Id: f9cc527f-3ff5-4bdd-b5d8-c4395c20aaeb
	I0806 00:55:54.825980    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:54.825983    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:54.825986    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:54.825995    5434 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-100000-m02\" not found","reason":"NotFound","details":{"name":"multinode-100000-m02","kind":"nodes"},"code":404}
	I0806 00:55:54.826112    5434 retry.go:31] will retry after 400.706988ms: nodes "multinode-100000-m02" not found
	I0806 00:55:55.227941    5434 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0806 00:55:55.228047    5434 round_trippers.go:463] DELETE https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:55:55.228058    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:55.228073    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:55.228078    5434 round_trippers.go:473]     Content-Type: application/json
	I0806 00:55:55.228084    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:55.230674    5434 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0806 00:55:55.230689    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:55.230699    5434 round_trippers.go:580]     Audit-Id: 40f81924-8d65-45c8-a203-7d049f0949e2
	I0806 00:55:55.230709    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:55.230714    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:55.230721    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:55.230726    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:55.230731    5434 round_trippers.go:580]     Content-Length: 210
	I0806 00:55:55.230735    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:55 GMT
	I0806 00:55:55.230778    5434 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-100000-m02\" not found","reason":"NotFound","details":{"name":"multinode-100000-m02","kind":"nodes"},"code":404}
	I0806 00:55:55.230842    5434 retry.go:31] will retry after 1.108023885s: nodes "multinode-100000-m02" not found
	I0806 00:55:56.340676    5434 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0806 00:55:56.340736    5434 round_trippers.go:463] DELETE https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:55:56.340746    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:56.340758    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:56.340764    5434 round_trippers.go:473]     Content-Type: application/json
	I0806 00:55:56.340768    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:56.343215    5434 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0806 00:55:56.343230    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:56.343238    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:56.343243    5434 round_trippers.go:580]     Content-Length: 210
	I0806 00:55:56.343246    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:56 GMT
	I0806 00:55:56.343249    5434 round_trippers.go:580]     Audit-Id: 2636c4bd-bdb6-46cd-9b18-05ba8b8e091f
	I0806 00:55:56.343253    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:56.343257    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:56.343260    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:56.343274    5434 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-100000-m02\" not found","reason":"NotFound","details":{"name":"multinode-100000-m02","kind":"nodes"},"code":404}
	I0806 00:55:56.343331    5434 retry.go:31] will retry after 1.598856034s: nodes "multinode-100000-m02" not found
	I0806 00:55:57.943718    5434 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0806 00:55:57.943867    5434 round_trippers.go:463] DELETE https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:55:57.943879    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:57.943891    5434 round_trippers.go:473]     Content-Type: application/json
	I0806 00:55:57.943899    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:57.943909    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:57.946570    5434 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0806 00:55:57.946586    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:57.946594    5434 round_trippers.go:580]     Audit-Id: e8afe910-bb1b-449c-a493-ca9d08761708
	I0806 00:55:57.946598    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:57.946602    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:57.946605    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:57.946608    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:57.946613    5434 round_trippers.go:580]     Content-Length: 210
	I0806 00:55:57.946616    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:58 GMT
	I0806 00:55:57.946629    5434 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-100000-m02\" not found","reason":"NotFound","details":{"name":"multinode-100000-m02","kind":"nodes"},"code":404}
	I0806 00:55:57.946696    5434 retry.go:31] will retry after 1.373802876s: nodes "multinode-100000-m02" not found
	I0806 00:55:59.322365    5434 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0806 00:55:59.322541    5434 round_trippers.go:463] DELETE https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:55:59.322565    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:59.322578    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:59.322588    5434 round_trippers.go:473]     Content-Type: application/json
	I0806 00:55:59.322596    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:59.324950    5434 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0806 00:55:59.324969    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:59.324985    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:59.324989    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:59.324993    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:59.324997    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:59.325001    5434 round_trippers.go:580]     Content-Length: 210
	I0806 00:55:59.325006    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:59 GMT
	I0806 00:55:59.325010    5434 round_trippers.go:580]     Audit-Id: e2af268c-a73a-490e-ab45-ea3236b146b1
	I0806 00:55:59.325022    5434 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-100000-m02\" not found","reason":"NotFound","details":{"name":"multinode-100000-m02","kind":"nodes"},"code":404}
	I0806 00:55:59.325079    5434 retry.go:31] will retry after 3.775436146s: nodes "multinode-100000-m02" not found
	I0806 00:56:03.102194    5434 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0806 00:56:03.102283    5434 round_trippers.go:463] DELETE https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:56:03.102292    5434 round_trippers.go:469] Request Headers:
	I0806 00:56:03.102303    5434 round_trippers.go:473]     Content-Type: application/json
	I0806 00:56:03.102311    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:56:03.102317    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:56:03.104958    5434 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0806 00:56:03.104973    5434 round_trippers.go:577] Response Headers:
	I0806 00:56:03.104980    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:56:03.104985    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:56:03.104989    5434 round_trippers.go:580]     Content-Length: 210
	I0806 00:56:03.104993    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:56:03 GMT
	I0806 00:56:03.104998    5434 round_trippers.go:580]     Audit-Id: 98aca243-affb-43c7-9161-6014c5c31359
	I0806 00:56:03.105003    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:56:03.105007    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:56:03.105025    5434 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-100000-m02\" not found","reason":"NotFound","details":{"name":"multinode-100000-m02","kind":"nodes"},"code":404}
	I0806 00:56:03.105086    5434 retry.go:31] will retry after 4.446851201s: nodes "multinode-100000-m02" not found
	I0806 00:56:07.553307    5434 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0806 00:56:07.553452    5434 round_trippers.go:463] DELETE https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:56:07.553463    5434 round_trippers.go:469] Request Headers:
	I0806 00:56:07.553474    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:56:07.553481    5434 round_trippers.go:473]     Content-Type: application/json
	I0806 00:56:07.553487    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:56:07.556261    5434 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0806 00:56:07.556275    5434 round_trippers.go:577] Response Headers:
	I0806 00:56:07.556282    5434 round_trippers.go:580]     Audit-Id: 8cbfa2ad-f1e0-435a-814d-b9df93541a97
	I0806 00:56:07.556305    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:56:07.556314    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:56:07.556320    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:56:07.556325    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:56:07.556328    5434 round_trippers.go:580]     Content-Length: 210
	I0806 00:56:07.556333    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:56:07 GMT
	I0806 00:56:07.556352    5434 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-100000-m02\" not found","reason":"NotFound","details":{"name":"multinode-100000-m02","kind":"nodes"},"code":404}
	I0806 00:56:07.556412    5434 retry.go:31] will retry after 7.516844959s: nodes "multinode-100000-m02" not found
	I0806 00:56:15.073758    5434 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0806 00:56:15.073882    5434 round_trippers.go:463] DELETE https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:56:15.073893    5434 round_trippers.go:469] Request Headers:
	I0806 00:56:15.073902    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:56:15.073918    5434 round_trippers.go:473]     Content-Type: application/json
	I0806 00:56:15.073927    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:56:15.076399    5434 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0806 00:56:15.076414    5434 round_trippers.go:577] Response Headers:
	I0806 00:56:15.076421    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:56:15.076425    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:56:15.076429    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:56:15.076433    5434 round_trippers.go:580]     Content-Length: 210
	I0806 00:56:15.076437    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:56:15 GMT
	I0806 00:56:15.076441    5434 round_trippers.go:580]     Audit-Id: eccd1fc1-72fa-4e6e-b254-5b88385411f9
	I0806 00:56:15.076446    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:56:15.076458    5434 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-100000-m02\" not found","reason":"NotFound","details":{"name":"multinode-100000-m02","kind":"nodes"},"code":404}
	I0806 00:56:15.076536    5434 retry.go:31] will retry after 10.77059598s: nodes "multinode-100000-m02" not found
	I0806 00:56:25.849418    5434 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0806 00:56:25.849492    5434 round_trippers.go:463] DELETE https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:56:25.849503    5434 round_trippers.go:469] Request Headers:
	I0806 00:56:25.849515    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:56:25.849531    5434 round_trippers.go:473]     Content-Type: application/json
	I0806 00:56:25.849537    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:56:25.852375    5434 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0806 00:56:25.852390    5434 round_trippers.go:577] Response Headers:
	I0806 00:56:25.852398    5434 round_trippers.go:580]     Audit-Id: dfaf584b-983a-4352-8ddc-170ef007830f
	I0806 00:56:25.852403    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:56:25.852407    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:56:25.852411    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:56:25.852415    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:56:25.852420    5434 round_trippers.go:580]     Content-Length: 210
	I0806 00:56:25.852424    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:56:25 GMT
	I0806 00:56:25.852442    5434 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-100000-m02\" not found","reason":"NotFound","details":{"name":"multinode-100000-m02","kind":"nodes"},"code":404}
	I0806 00:56:25.852512    5434 retry.go:31] will retry after 10.459387207s: nodes "multinode-100000-m02" not found
	I0806 00:56:36.312777    5434 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0806 00:56:36.312919    5434 round_trippers.go:463] DELETE https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:56:36.312928    5434 round_trippers.go:469] Request Headers:
	I0806 00:56:36.312938    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:56:36.312946    5434 round_trippers.go:473]     Content-Type: application/json
	I0806 00:56:36.312950    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:56:36.315607    5434 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0806 00:56:36.315631    5434 round_trippers.go:577] Response Headers:
	I0806 00:56:36.315640    5434 round_trippers.go:580]     Audit-Id: cdb9cd6a-848e-43df-878d-ce6bd2124463
	I0806 00:56:36.315644    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:56:36.315648    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:56:36.315653    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:56:36.315660    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:56:36.315665    5434 round_trippers.go:580]     Content-Length: 210
	I0806 00:56:36.315669    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:56:36 GMT
	I0806 00:56:36.315686    5434 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-100000-m02\" not found","reason":"NotFound","details":{"name":"multinode-100000-m02","kind":"nodes"},"code":404}
	I0806 00:56:36.315747    5434 retry.go:31] will retry after 23.324068664s: nodes "multinode-100000-m02" not found
	I0806 00:56:59.641144    5434 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0806 00:56:59.641201    5434 round_trippers.go:463] DELETE https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:56:59.641225    5434 round_trippers.go:469] Request Headers:
	I0806 00:56:59.641239    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:56:59.641249    5434 round_trippers.go:473]     Content-Type: application/json
	I0806 00:56:59.641257    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:56:59.643697    5434 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0806 00:56:59.643714    5434 round_trippers.go:577] Response Headers:
	I0806 00:56:59.643720    5434 round_trippers.go:580]     Audit-Id: 38b7f694-4a26-4adb-ab3a-228ffe36e476
	I0806 00:56:59.643724    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:56:59.643727    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:56:59.643731    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:56:59.643735    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:56:59.643738    5434 round_trippers.go:580]     Content-Length: 210
	I0806 00:56:59.643741    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:56:59 GMT
	I0806 00:56:59.643754    5434 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-100000-m02\" not found","reason":"NotFound","details":{"name":"multinode-100000-m02","kind":"nodes"},"code":404}
	I0806 00:56:59.643814    5434 retry.go:31] will retry after 37.697414419s: nodes "multinode-100000-m02" not found
	I0806 00:57:37.342702    5434 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0806 00:57:37.342800    5434 round_trippers.go:463] DELETE https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:37.342810    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:37.342821    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:37.342830    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:37.342835    5434 round_trippers.go:473]     Content-Type: application/json
	I0806 00:57:37.345526    5434 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0806 00:57:37.345538    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:37.345545    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:37.345550    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:37.345555    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:37.345566    5434 round_trippers.go:580]     Content-Length: 210
	I0806 00:57:37.345570    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:37 GMT
	I0806 00:57:37.345573    5434 round_trippers.go:580]     Audit-Id: b9d864fc-012a-495b-95ff-adf144d59a54
	I0806 00:57:37.345578    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:37.345617    5434 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-100000-m02\" not found","reason":"NotFound","details":{"name":"multinode-100000-m02","kind":"nodes"},"code":404}
	E0806 00:57:37.345675    5434 node.go:177] kubectl delete node "multinode-100000-m02" failed: nodes "multinode-100000-m02" not found
	E0806 00:57:37.345697    5434 start.go:332] error removing existing worker node "m02" before rejoining cluster, will continue anyway: nodes "multinode-100000-m02" not found
	I0806 00:57:37.345704    5434 start.go:334] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0806 00:57:37.345721    5434 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0806 00:57:37.345736    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:57:37.345908    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:57:37.346039    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:57:37.346180    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:57:37.346287    5434 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:57:37.440608    5434 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 7th74k.mbppog0s62qzrc0x --discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e 
	I0806 00:57:37.440643    5434 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0806 00:57:37.440664    5434 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7th74k.mbppog0s62qzrc0x --discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-100000-m02"
	I0806 00:57:37.471583    5434 command_runner.go:130] > [preflight] Running pre-flight checks
	I0806 00:57:37.570516    5434 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0806 00:57:37.570539    5434 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0806 00:57:37.602396    5434 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 00:57:37.602415    5434 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 00:57:37.602420    5434 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0806 00:57:37.711821    5434 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0806 00:57:38.219224    5434 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 507.362685ms
	I0806 00:57:38.219246    5434 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0806 00:57:38.228999    5434 command_runner.go:130] > This node has joined the cluster:
	I0806 00:57:38.229014    5434 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0806 00:57:38.229019    5434 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0806 00:57:38.229024    5434 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0806 00:57:38.230556    5434 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 00:57:38.230752    5434 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0806 00:57:38.455515    5434 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0806 00:57:38.455597    5434 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-100000-m02 minikube.k8s.io/updated_at=2024_08_06T00_57_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8 minikube.k8s.io/name=multinode-100000 minikube.k8s.io/primary=false
	I0806 00:57:38.532968    5434 command_runner.go:130] > node/multinode-100000-m02 labeled
	I0806 00:57:38.534104    5434 start.go:319] duration metric: took 1m43.950741215s to joinCluster
	I0806 00:57:38.534142    5434 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0806 00:57:38.534348    5434 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:57:38.556945    5434 out.go:177] * Verifying Kubernetes components...
	I0806 00:57:38.616434    5434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:57:38.711085    5434 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:57:38.723307    5434 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:57:38.723529    5434 kapi.go:59] client config for multinode-100000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key", CAFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1231e1a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 00:57:38.723727    5434 node_ready.go:35] waiting up to 6m0s for node "multinode-100000-m02" to be "Ready" ...
	I0806 00:57:38.723769    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:38.723773    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:38.723779    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:38.723783    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:38.725173    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:38.725181    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:38.725191    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:38.725197    5434 round_trippers.go:580]     Content-Length: 3920
	I0806 00:57:38.725201    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:38 GMT
	I0806 00:57:38.725205    5434 round_trippers.go:580]     Audit-Id: 3bab3b05-f565-4c20-9491-957d949d06b6
	I0806 00:57:38.725209    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:38.725215    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:38.725219    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:38.725301    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1698","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 2896 chars]
	I0806 00:57:39.223944    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:39.223970    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:39.223980    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:39.224064    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:39.226799    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:39.226812    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:39.226819    5434 round_trippers.go:580]     Audit-Id: c3e00223-64dc-4c31-ae51-85dc85e235a3
	I0806 00:57:39.226822    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:39.226826    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:39.226830    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:39.226833    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:39.226837    5434 round_trippers.go:580]     Content-Length: 3920
	I0806 00:57:39.226842    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:39 GMT
	I0806 00:57:39.226909    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1698","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 2896 chars]
	I0806 00:57:39.724720    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:39.724741    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:39.724753    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:39.724761    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:39.727231    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:39.727246    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:39.727254    5434 round_trippers.go:580]     Content-Length: 4029
	I0806 00:57:39.727259    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:39 GMT
	I0806 00:57:39.727263    5434 round_trippers.go:580]     Audit-Id: b5469a79-afa2-4116-b64c-cce3265be3e2
	I0806 00:57:39.727266    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:39.727273    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:39.727276    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:39.727280    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:39.727343    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1705","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3005 chars]
	I0806 00:57:40.224760    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:40.224779    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:40.224787    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:40.224793    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:40.226812    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:40.226822    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:40.226827    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:40.226829    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:40.226832    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:40.226834    5434 round_trippers.go:580]     Content-Length: 4029
	I0806 00:57:40.226837    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:40 GMT
	I0806 00:57:40.226840    5434 round_trippers.go:580]     Audit-Id: 35941e6b-5c8b-4737-aee3-5730c81b0175
	I0806 00:57:40.226843    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:40.226886    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1705","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3005 chars]
	I0806 00:57:40.726095    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:40.726116    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:40.726128    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:40.726135    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:40.728529    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:40.728544    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:40.728550    5434 round_trippers.go:580]     Content-Length: 4029
	I0806 00:57:40.728554    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:40 GMT
	I0806 00:57:40.728557    5434 round_trippers.go:580]     Audit-Id: 9616e15e-e7ac-4eef-8533-a6e9e989cd1d
	I0806 00:57:40.728562    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:40.728571    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:40.728574    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:40.728577    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:40.728638    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1705","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3005 chars]
	I0806 00:57:40.728834    5434 node_ready.go:53] node "multinode-100000-m02" has status "Ready":"False"
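	The repeated GET requests above are minikube's node-readiness wait: after the worker joins, the client polls `/api/v1/nodes/multinode-100000-m02` roughly every 500ms (for up to 6m0s) and reports `"Ready":"False"` until the node's `Ready` condition turns `"True"`. A minimal sketch of that check and wait loop, assuming hypothetical helper names (`node_is_ready`, `wait_for_node_ready`, an injected `fetch` callable) rather than minikube's actual Go code:

	```python
	import time

	def node_is_ready(node: dict) -> bool:
	    # A node counts as Ready when its "Ready" condition reports status "True";
	    # a missing status/conditions block (as on a freshly joined node) is not ready.
	    for cond in node.get("status", {}).get("conditions", []):
	        if cond.get("type") == "Ready":
	            return cond.get("status") == "True"
	    return False

	def wait_for_node_ready(fetch, timeout_s=360.0, interval_s=0.5, sleep=time.sleep):
	    # Poll fetch() -> node dict until Ready or timeout, mirroring the
	    # ~500ms cadence and 6m budget seen in the log above.
	    deadline = time.monotonic() + timeout_s
	    while time.monotonic() < deadline:
	        if node_is_ready(fetch()):
	            return True
	        sleep(interval_s)
	    return False
	```

	The `fetch` and `sleep` parameters are injected only so the loop can be exercised without a live apiserver; the real client issues the authenticated GETs shown in the surrounding log.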
	I0806 00:57:41.224765    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:41.224781    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:41.224794    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:41.224799    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:41.226476    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:41.226488    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:41.226494    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:41.226498    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:41.226502    5434 round_trippers.go:580]     Content-Length: 4029
	I0806 00:57:41.226516    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:41 GMT
	I0806 00:57:41.226523    5434 round_trippers.go:580]     Audit-Id: c547ed13-66a3-476a-8a5c-3e377cd019d1
	I0806 00:57:41.226526    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:41.226528    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:41.226583    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1705","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3005 chars]
	I0806 00:57:41.725236    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:41.725248    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:41.725259    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:41.725277    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:41.726907    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:41.726930    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:41.726937    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:41 GMT
	I0806 00:57:41.726942    5434 round_trippers.go:580]     Audit-Id: 3caf33af-9cac-487e-b195-58119f24d22e
	I0806 00:57:41.726945    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:41.726951    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:41.726955    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:41.726957    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:41.726960    5434 round_trippers.go:580]     Content-Length: 4029
	I0806 00:57:41.727005    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1705","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3005 chars]
	I0806 00:57:42.224914    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:42.224930    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:42.224937    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:42.224940    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:42.226393    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:42.226406    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:42.226413    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:42.226418    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:42.226421    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:42.226424    5434 round_trippers.go:580]     Content-Length: 4029
	I0806 00:57:42.226428    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:42 GMT
	I0806 00:57:42.226438    5434 round_trippers.go:580]     Audit-Id: 8776b9b1-351f-4d08-86af-84f455f47b75
	I0806 00:57:42.226441    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:42.226470    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1705","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3005 chars]
	I0806 00:57:42.723974    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:42.724030    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:42.724036    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:42.724040    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:42.725742    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:42.725755    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:42.725764    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:42.725768    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:42.725772    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:42.725785    5434 round_trippers.go:580]     Content-Length: 4029
	I0806 00:57:42.725791    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:42 GMT
	I0806 00:57:42.725794    5434 round_trippers.go:580]     Audit-Id: a21dfbc2-bc52-48a4-92db-8cb89208b4f7
	I0806 00:57:42.725797    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:42.725850    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1705","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3005 chars]
	I0806 00:57:43.224069    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:43.224081    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:43.224087    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:43.224090    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:43.225620    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:43.225631    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:43.225637    5434 round_trippers.go:580]     Audit-Id: 0c1fa402-7a17-4180-ad97-85a88e052223
	I0806 00:57:43.225640    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:43.225650    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:43.225655    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:43.225657    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:43.225660    5434 round_trippers.go:580]     Content-Length: 4029
	I0806 00:57:43.225666    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:43 GMT
	I0806 00:57:43.225708    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1705","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3005 chars]
	I0806 00:57:43.225863    5434 node_ready.go:53] node "multinode-100000-m02" has status "Ready":"False"
	I0806 00:57:43.725011    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:43.725026    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:43.725032    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:43.725036    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:43.726950    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:43.726962    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:43.726968    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:43.726972    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:43.726974    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:43.726977    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:43.726980    5434 round_trippers.go:580]     Content-Length: 4029
	I0806 00:57:43.726983    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:43 GMT
	I0806 00:57:43.726987    5434 round_trippers.go:580]     Audit-Id: 2ec77324-ad35-4954-86c3-1cd63f932963
	I0806 00:57:43.727037    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1705","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3005 chars]
	I0806 00:57:44.224193    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:44.224209    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:44.224216    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:44.224219    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:44.225890    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:44.225904    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:44.225913    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:44.225919    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:44.225925    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:44.225929    5434 round_trippers.go:580]     Content-Length: 4029
	I0806 00:57:44.225933    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:44 GMT
	I0806 00:57:44.225936    5434 round_trippers.go:580]     Audit-Id: ee0f4f06-8dc0-48ba-a078-5881e2527bb5
	I0806 00:57:44.225940    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:44.225999    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1705","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3005 chars]
	I0806 00:57:44.724507    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:44.724533    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:44.724558    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:44.724606    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:44.727414    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:44.727430    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:44.727437    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:44.727442    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:44.727446    5434 round_trippers.go:580]     Content-Length: 4029
	I0806 00:57:44.727450    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:44 GMT
	I0806 00:57:44.727455    5434 round_trippers.go:580]     Audit-Id: 61c9d604-08b6-44d8-af4a-18e3dce1db78
	I0806 00:57:44.727459    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:44.727462    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:44.727524    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1705","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3005 chars]
	I0806 00:57:45.225103    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:45.225132    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:45.225145    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:45.225151    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:45.228092    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:45.228108    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:45.228116    5434 round_trippers.go:580]     Audit-Id: fa82d64a-5a25-4f13-beb3-16ddfa3bedb5
	I0806 00:57:45.228122    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:45.228126    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:45.228130    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:45.228134    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:45.228138    5434 round_trippers.go:580]     Content-Length: 4029
	I0806 00:57:45.228142    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:45 GMT
	I0806 00:57:45.228204    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1705","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3005 chars]
	I0806 00:57:45.228411    5434 node_ready.go:53] node "multinode-100000-m02" has status "Ready":"False"
	I0806 00:57:45.724009    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:45.724021    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:45.724027    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:45.724030    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:45.725579    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:45.725590    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:45.725615    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:45 GMT
	I0806 00:57:45.725626    5434 round_trippers.go:580]     Audit-Id: bb1e2183-350f-4b0d-b3a0-d525e6313f9d
	I0806 00:57:45.725630    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:45.725633    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:45.725652    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:45.725659    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:45.725663    5434 round_trippers.go:580]     Content-Length: 4029
	I0806 00:57:45.725693    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1705","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3005 chars]
	I0806 00:57:46.224012    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:46.224047    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:46.224057    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:46.224063    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:46.225437    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:46.225447    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:46.225452    5434 round_trippers.go:580]     Audit-Id: cb77a9df-a7b2-4a13-aabf-c2368fecaf1c
	I0806 00:57:46.225455    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:46.225458    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:46.225460    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:46.225463    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:46.225467    5434 round_trippers.go:580]     Content-Length: 4029
	I0806 00:57:46.225469    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:46 GMT
	I0806 00:57:46.225517    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1705","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3005 chars]
	I0806 00:57:46.725493    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:46.725529    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:46.725537    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:46.725542    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:46.727073    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:46.727083    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:46.727088    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:46.727091    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:46.727095    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:46.727098    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:46.727100    5434 round_trippers.go:580]     Content-Length: 4029
	I0806 00:57:46.727104    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:46 GMT
	I0806 00:57:46.727107    5434 round_trippers.go:580]     Audit-Id: b8563860-357b-4864-a6e0-acd9dad98a47
	I0806 00:57:46.727198    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1705","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3005 chars]
	I0806 00:57:47.224885    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:47.224941    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:47.224948    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:47.224952    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:47.226603    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:47.226615    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:47.226620    5434 round_trippers.go:580]     Audit-Id: e1431d28-11d5-4df9-a7ae-03ec8429670c
	I0806 00:57:47.226624    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:47.226626    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:47.226629    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:47.226631    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:47.226634    5434 round_trippers.go:580]     Content-Length: 4029
	I0806 00:57:47.226636    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:47 GMT
	I0806 00:57:47.226709    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1705","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3005 chars]
	I0806 00:57:47.726241    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:47.726269    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:47.726281    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:47.726296    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:47.728603    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:47.728625    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:47.728642    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:47.728653    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:47.728658    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:47.728662    5434 round_trippers.go:580]     Content-Length: 4029
	I0806 00:57:47.728666    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:47 GMT
	I0806 00:57:47.728670    5434 round_trippers.go:580]     Audit-Id: 32992b15-0a16-485d-8662-9084c12f8e92
	I0806 00:57:47.728673    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:47.728738    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1705","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3005 chars]
	I0806 00:57:47.728930    5434 node_ready.go:53] node "multinode-100000-m02" has status "Ready":"False"
	I0806 00:57:48.224302    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:48.224324    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:48.224335    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:48.224341    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:48.226950    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:48.226965    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:48.226977    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:48.226982    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:48.226985    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:48.227007    5434 round_trippers.go:580]     Content-Length: 4029
	I0806 00:57:48.227013    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:48 GMT
	I0806 00:57:48.227017    5434 round_trippers.go:580]     Audit-Id: 54295014-f023-493a-b7b2-002d81b0a3f7
	I0806 00:57:48.227023    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:48.227096    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1705","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3005 chars]
	I0806 00:57:48.724182    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:48.724203    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:48.724214    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:48.724221    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:48.726465    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:48.726486    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:48.726498    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:48.726533    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:48 GMT
	I0806 00:57:48.726542    5434 round_trippers.go:580]     Audit-Id: afcb1c88-7e72-4ed5-8c92-c8a6f6febea4
	I0806 00:57:48.726546    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:48.726549    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:48.726553    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:48.726633    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:49.224719    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:49.224748    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:49.224760    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:49.224765    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:49.227461    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:49.227480    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:49.227496    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:49.227502    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:49 GMT
	I0806 00:57:49.227506    5434 round_trippers.go:580]     Audit-Id: 12afd45f-a2ec-486f-8f90-4bb62c2151b4
	I0806 00:57:49.227511    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:49.227515    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:49.227518    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:49.227776    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:49.724496    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:49.724527    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:49.724539    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:49.724546    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:49.727213    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:49.727228    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:49.727235    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:49.727239    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:49.727243    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:49.727247    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:49 GMT
	I0806 00:57:49.727250    5434 round_trippers.go:580]     Audit-Id: 3261aa2b-ae3b-4565-98cd-3ebf59e0fd3b
	I0806 00:57:49.727255    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:49.727470    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:50.225030    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:50.225053    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:50.225065    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:50.225070    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:50.227951    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:50.227966    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:50.227984    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:50.228023    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:50 GMT
	I0806 00:57:50.228032    5434 round_trippers.go:580]     Audit-Id: 729a3da8-4502-4f59-8163-c1cbbf872830
	I0806 00:57:50.228036    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:50.228039    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:50.228043    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:50.228206    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:50.228417    5434 node_ready.go:53] node "multinode-100000-m02" has status "Ready":"False"
	I0806 00:57:50.725459    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:50.725481    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:50.725493    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:50.725526    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:50.728081    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:50.728097    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:50.728107    5434 round_trippers.go:580]     Audit-Id: 360b6392-fd29-4993-91b3-487e9f6775b0
	I0806 00:57:50.728115    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:50.728121    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:50.728126    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:50.728130    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:50.728133    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:50 GMT
	I0806 00:57:50.728374    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:51.224483    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:51.224588    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:51.224615    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:51.224619    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:51.227243    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:51.227254    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:51.227261    5434 round_trippers.go:580]     Audit-Id: b7499a85-98b8-4b7c-9a4f-31a58d18da1c
	I0806 00:57:51.227267    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:51.227270    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:51.227274    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:51.227279    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:51.227283    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:51 GMT
	I0806 00:57:51.227379    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:51.725230    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:51.725250    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:51.725261    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:51.725267    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:51.727905    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:51.727919    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:51.727926    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:51.727930    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:51.727933    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:51 GMT
	I0806 00:57:51.727937    5434 round_trippers.go:580]     Audit-Id: 9ea6e03d-81f1-44a3-89cf-bafa126532f8
	I0806 00:57:51.727941    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:51.727944    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:51.728077    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:52.224921    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:52.224946    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:52.224960    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:52.224967    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:52.227837    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:52.227852    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:52.227860    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:52.227864    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:52.227869    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:52 GMT
	I0806 00:57:52.227873    5434 round_trippers.go:580]     Audit-Id: 98fca0cf-aa76-4ea4-8522-7cd39d623570
	I0806 00:57:52.227877    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:52.227881    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:52.227942    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:52.725013    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:52.725037    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:52.725048    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:52.725060    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:52.727753    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:52.727771    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:52.727778    5434 round_trippers.go:580]     Audit-Id: 46dd3481-b051-4ab0-ae1c-95d8b0e02e35
	I0806 00:57:52.727788    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:52.727795    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:52.727800    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:52.727805    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:52.727810    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:52 GMT
	I0806 00:57:52.728341    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:52.728586    5434 node_ready.go:53] node "multinode-100000-m02" has status "Ready":"False"
	I0806 00:57:53.224625    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:53.224637    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:53.224643    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:53.224646    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:53.226506    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:53.226519    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:53.226524    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:53.226528    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:53.226531    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:53.226533    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:53.226535    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:53 GMT
	I0806 00:57:53.226537    5434 round_trippers.go:580]     Audit-Id: da8e8899-47b8-4b35-941d-db363ee18d6e
	I0806 00:57:53.226637    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:53.726051    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:53.726151    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:53.726167    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:53.726173    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:53.729063    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:53.729077    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:53.729085    5434 round_trippers.go:580]     Audit-Id: 3982aeac-6867-49a8-b6e8-84fffb7dbf4b
	I0806 00:57:53.729089    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:53.729092    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:53.729097    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:53.729100    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:53.729124    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:53 GMT
	I0806 00:57:53.729213    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:54.224926    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:54.224947    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:54.224960    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:54.224967    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:54.227074    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:54.227086    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:54.227096    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:54.227105    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:54.227112    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:54 GMT
	I0806 00:57:54.227118    5434 round_trippers.go:580]     Audit-Id: 8e5bfa48-c622-444e-aebd-1b2b8f7bfcaf
	I0806 00:57:54.227124    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:54.227128    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:54.227399    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:54.725678    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:54.725704    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:54.725717    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:54.725723    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:54.728089    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:54.728109    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:54.728121    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:54 GMT
	I0806 00:57:54.728127    5434 round_trippers.go:580]     Audit-Id: affee715-d54b-43e1-be9c-f0298de6368d
	I0806 00:57:54.728134    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:54.728138    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:54.728173    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:54.728182    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:54.728325    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:55.224533    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:55.224561    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:55.224607    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:55.224632    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:55.227613    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:55.227630    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:55.227638    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:55.227644    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:55 GMT
	I0806 00:57:55.227648    5434 round_trippers.go:580]     Audit-Id: fa358674-c49e-4645-b893-642d10c9b29b
	I0806 00:57:55.227651    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:55.227655    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:55.227658    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:55.227814    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:55.228025    5434 node_ready.go:53] node "multinode-100000-m02" has status "Ready":"False"
	I0806 00:57:55.724444    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:55.724466    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:55.724479    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:55.724485    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:55.726961    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:55.726977    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:55.726984    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:55.726988    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:55.727002    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:55 GMT
	I0806 00:57:55.727010    5434 round_trippers.go:580]     Audit-Id: bf6537d8-989b-4773-b7c1-952ad3e3597f
	I0806 00:57:55.727016    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:55.727020    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:55.727307    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:56.224462    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:56.224483    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:56.224495    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:56.224501    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:56.226706    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:56.226720    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:56.226728    5434 round_trippers.go:580]     Audit-Id: 40a6c958-d47f-4c5f-b662-81f57d85e731
	I0806 00:57:56.226732    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:56.226735    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:56.226738    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:56.226741    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:56.226745    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:56 GMT
	I0806 00:57:56.226812    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:56.724406    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:56.724428    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:56.724437    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:56.724443    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:56.726996    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:56.727008    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:56.727015    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:56.727080    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:56.727095    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:56.727101    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:56.727103    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:56 GMT
	I0806 00:57:56.727107    5434 round_trippers.go:580]     Audit-Id: f0ff7515-3524-4187-b078-2f0438d10e89
	I0806 00:57:56.727208    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:57.225306    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:57.225394    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:57.225409    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:57.225416    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:57.227979    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:57.227995    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:57.228003    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:57.228007    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:57.228010    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:57.228014    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:57 GMT
	I0806 00:57:57.228019    5434 round_trippers.go:580]     Audit-Id: df809995-94a1-4c0c-a430-17d60c9e7015
	I0806 00:57:57.228032    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:57.228205    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:57.228424    5434 node_ready.go:53] node "multinode-100000-m02" has status "Ready":"False"
	I0806 00:57:57.726019    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:57.726046    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:57.726059    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:57.726067    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:57.728773    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:57.728793    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:57.728800    5434 round_trippers.go:580]     Audit-Id: d753c800-6ae4-4a08-bbfe-c56afc9035aa
	I0806 00:57:57.728805    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:57.728819    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:57.728824    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:57.728828    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:57.728831    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:57 GMT
	I0806 00:57:57.728909    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:58.225669    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:58.225694    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:58.225705    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:58.225710    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:58.228253    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:58.228269    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:58.228276    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:58.228281    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:58.228285    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:58.228288    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:58.228293    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:58 GMT
	I0806 00:57:58.228298    5434 round_trippers.go:580]     Audit-Id: f4a05e02-5c66-49dc-a953-8ed50c8e8f68
	I0806 00:57:58.228382    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:58.725444    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:58.725478    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:58.725557    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:58.725567    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:58.728312    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:58.728327    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:58.728334    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:58.728338    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:58.728343    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:58.728346    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:58.728350    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:58 GMT
	I0806 00:57:58.728354    5434 round_trippers.go:580]     Audit-Id: e8e19a9c-bef2-4fd3-85fd-8c3d4d539afc
	I0806 00:57:58.728437    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1736","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3263 chars]
	I0806 00:57:58.728653    5434 node_ready.go:49] node "multinode-100000-m02" has status "Ready":"True"
	I0806 00:57:58.728664    5434 node_ready.go:38] duration metric: took 20.0045333s for node "multinode-100000-m02" to be "Ready" ...
	I0806 00:57:58.728672    5434 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 00:57:58.728719    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:57:58.728726    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:58.728733    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:58.728738    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:58.731131    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:58.731143    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:58.731150    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:58.731153    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:58.731157    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:58.731160    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:58 GMT
	I0806 00:57:58.731164    5434 round_trippers.go:580]     Audit-Id: 1ea7890c-0d77-4a59-85dc-877fe634a3fd
	I0806 00:57:58.731166    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:58.732128    5434 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1737"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"1561","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86448 chars]
	I0806 00:57:58.734016    5434 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace to be "Ready" ...
	I0806 00:57:58.734058    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:57:58.734063    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:58.734069    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:58.734074    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:58.735821    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:58.735830    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:58.735835    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:58.735839    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:58 GMT
	I0806 00:57:58.735842    5434 round_trippers.go:580]     Audit-Id: 49535de9-ee43-41e0-a2b4-1e5382858f98
	I0806 00:57:58.735850    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:58.735854    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:58.735857    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:58.736046    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"1561","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6784 chars]
	I0806 00:57:58.736293    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:57:58.736300    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:58.736305    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:58.736309    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:58.737613    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:58.737620    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:58.737625    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:58 GMT
	I0806 00:57:58.737636    5434 round_trippers.go:580]     Audit-Id: 1b9b523d-45a4-446f-a983-3cb9d55c7523
	I0806 00:57:58.737640    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:58.737643    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:58.737647    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:58.737650    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:58.737906    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1566","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0806 00:57:58.738086    5434 pod_ready.go:92] pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace has status "Ready":"True"
	I0806 00:57:58.738094    5434 pod_ready.go:81] duration metric: took 4.068285ms for pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace to be "Ready" ...
	I0806 00:57:58.738102    5434 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:57:58.738134    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-100000
	I0806 00:57:58.738138    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:58.738144    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:58.738147    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:58.739543    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:58.739550    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:58.739560    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:58.739564    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:58.739568    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:58 GMT
	I0806 00:57:58.739571    5434 round_trippers.go:580]     Audit-Id: efe03bb4-347e-4ec8-9d1a-5b437bd243dd
	I0806 00:57:58.739575    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:58.739581    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:58.739777    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-100000","namespace":"kube-system","uid":"227ab7d9-399e-4151-bee7-1520182e38fe","resourceVersion":"1536","creationTimestamp":"2024-08-06T07:37:59Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"4d956ffcd8bdef6a75a3174d9c9d792c","kubernetes.io/config.mirror":"4d956ffcd8bdef6a75a3174d9c9d792c","kubernetes.io/config.seen":"2024-08-06T07:37:55.730523562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:37:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6358 chars]
	I0806 00:57:58.739989    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:57:58.739995    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:58.740001    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:58.740004    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:58.740979    5434 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:57:58.740986    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:58.740990    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:58 GMT
	I0806 00:57:58.740995    5434 round_trippers.go:580]     Audit-Id: 20653bf5-4fc8-450b-9f45-63adf08d2e0a
	I0806 00:57:58.740999    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:58.741004    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:58.741009    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:58.741014    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:58.741192    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1566","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0806 00:57:58.741360    5434 pod_ready.go:92] pod "etcd-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:57:58.741368    5434 pod_ready.go:81] duration metric: took 3.260646ms for pod "etcd-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:57:58.741378    5434 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:57:58.741405    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-100000
	I0806 00:57:58.741410    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:58.741415    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:58.741419    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:58.742478    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:58.742487    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:58.742492    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:58.742496    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:58.742502    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:58.742507    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:58.742511    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:58 GMT
	I0806 00:57:58.742514    5434 round_trippers.go:580]     Audit-Id: e0b1089a-46ca-44cf-8422-945411302001
	I0806 00:57:58.742620    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-100000","namespace":"kube-system","uid":"ce1dee9b-5f30-49a9-9066-7faf5f65c4d3","resourceVersion":"1538","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"7812fbdfd4f741d8b504bcb30d9268c5","kubernetes.io/config.mirror":"7812fbdfd4f741d8b504bcb30d9268c5","kubernetes.io/config.seen":"2024-08-06T07:38:00.425843150Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7892 chars]
	I0806 00:57:58.742857    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:57:58.742864    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:58.742870    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:58.742874    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:58.743732    5434 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:57:58.743739    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:58.743744    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:58.743747    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:58 GMT
	I0806 00:57:58.743751    5434 round_trippers.go:580]     Audit-Id: 78fc653a-6937-4ab0-a3db-331f4cec6452
	I0806 00:57:58.743754    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:58.743756    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:58.743759    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:58.743860    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1566","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0806 00:57:58.744027    5434 pod_ready.go:92] pod "kube-apiserver-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:57:58.744034    5434 pod_ready.go:81] duration metric: took 2.65106ms for pod "kube-apiserver-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:57:58.744040    5434 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:57:58.744064    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-100000
	I0806 00:57:58.744068    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:58.744073    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:58.744077    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:58.744958    5434 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:57:58.744964    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:58.744969    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:58.744973    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:58.744975    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:58.744979    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:58 GMT
	I0806 00:57:58.744981    5434 round_trippers.go:580]     Audit-Id: de2c1c4a-2a16-4d10-a066-0020fb4f576d
	I0806 00:57:58.744984    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:58.745293    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-100000","namespace":"kube-system","uid":"cefe88fb-c337-47c3-b4f2-acdadde539f2","resourceVersion":"1546","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0ae29164078dfb7d8ac7d5a935c4d875","kubernetes.io/config.mirror":"0ae29164078dfb7d8ac7d5a935c4d875","kubernetes.io/config.seen":"2024-08-06T07:38:00.425770816Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7465 chars]
	I0806 00:57:58.745511    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:57:58.745517    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:58.745523    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:58.745526    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:58.746444    5434 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:57:58.746451    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:58.746456    5434 round_trippers.go:580]     Audit-Id: 7be2dd48-70e6-4ea9-9d1c-69641cba744b
	I0806 00:57:58.746459    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:58.746466    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:58.746470    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:58.746472    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:58.746474    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:58 GMT
	I0806 00:57:58.746583    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1566","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0806 00:57:58.746739    5434 pod_ready.go:92] pod "kube-controller-manager-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:57:58.746746    5434 pod_ready.go:81] duration metric: took 2.701845ms for pod "kube-controller-manager-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:57:58.746755    5434 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-crsrr" in "kube-system" namespace to be "Ready" ...
	I0806 00:57:58.926258    5434 request.go:629] Waited for 179.453205ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crsrr
	I0806 00:57:58.926397    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crsrr
	I0806 00:57:58.926409    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:58.926418    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:58.926434    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:58.929124    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:58.929141    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:58.929148    5434 round_trippers.go:580]     Audit-Id: 93e73643-103a-4b8c-9b99-e2cf305ff493
	I0806 00:57:58.929153    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:58.929157    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:58.929161    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:58.929164    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:58.929177    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:59 GMT
	I0806 00:57:58.929292    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-crsrr","generateName":"kube-proxy-","namespace":"kube-system","uid":"f72beca3-9601-4aad-b3ba-33f8de5db052","resourceVersion":"1541","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aeb7868a-2175-4480-b58d-3eb9a593c884","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aeb7868a-2175-4480-b58d-3eb9a593c884\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0806 00:57:59.126390    5434 request.go:629] Waited for 196.764666ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:57:59.126455    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:57:59.126461    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:59.126467    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:59.126471    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:59.128204    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:59.128213    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:59.128217    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:59.128221    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:59.128224    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:59.128227    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:59.128230    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:59 GMT
	I0806 00:57:59.128232    5434 round_trippers.go:580]     Audit-Id: 374278f2-b28e-4d4e-aec4-37bf681a998b
	I0806 00:57:59.128407    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1566","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0806 00:57:59.128605    5434 pod_ready.go:92] pod "kube-proxy-crsrr" in "kube-system" namespace has status "Ready":"True"
	I0806 00:57:59.128614    5434 pod_ready.go:81] duration metric: took 381.847235ms for pod "kube-proxy-crsrr" in "kube-system" namespace to be "Ready" ...
	I0806 00:57:59.128621    5434 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d9c42" in "kube-system" namespace to be "Ready" ...
	I0806 00:57:59.326716    5434 request.go:629] Waited for 198.050124ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d9c42
	I0806 00:57:59.326803    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d9c42
	I0806 00:57:59.326813    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:59.326824    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:59.326836    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:59.329406    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:59.329426    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:59.329437    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:59 GMT
	I0806 00:57:59.329450    5434 round_trippers.go:580]     Audit-Id: 978731a9-98d4-4f40-9430-2e4146495769
	I0806 00:57:59.329456    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:59.329462    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:59.329467    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:59.329473    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:59.329660    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-d9c42","generateName":"kube-proxy-","namespace":"kube-system","uid":"fe685526-4722-4113-b2b3-9a84182541b7","resourceVersion":"1590","creationTimestamp":"2024-08-06T07:52:07Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aeb7868a-2175-4480-b58d-3eb9a593c884","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:52:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aeb7868a-2175-4480-b58d-3eb9a593c884\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6057 chars]
	I0806 00:57:59.526645    5434 request.go:629] Waited for 196.624715ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m03
	I0806 00:57:59.526768    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m03
	I0806 00:57:59.526778    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:59.526790    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:59.526800    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:59.529319    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:59.529335    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:59.529346    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:59 GMT
	I0806 00:57:59.529377    5434 round_trippers.go:580]     Audit-Id: 878f200e-3aba-4dd6-be77-d05bbbbb5647
	I0806 00:57:59.529388    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:59.529391    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:59.529394    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:59.529398    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:59.529479    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m03","uid":"3008e7de-9d1d-41e0-b794-0ab4c70ffeba","resourceVersion":"1602","creationTimestamp":"2024-08-06T07:53:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_53_13_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:53:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4567 chars]
	I0806 00:57:59.529736    5434 pod_ready.go:97] node "multinode-100000-m03" hosting pod "kube-proxy-d9c42" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-100000-m03" has status "Ready":"Unknown"
	I0806 00:57:59.529751    5434 pod_ready.go:81] duration metric: took 401.118154ms for pod "kube-proxy-d9c42" in "kube-system" namespace to be "Ready" ...
	E0806 00:57:59.529759    5434 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-100000-m03" hosting pod "kube-proxy-d9c42" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-100000-m03" has status "Ready":"Unknown"
	I0806 00:57:59.529765    5434 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xgwwm" in "kube-system" namespace to be "Ready" ...
	I0806 00:57:59.726474    5434 request.go:629] Waited for 196.559763ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xgwwm
	I0806 00:57:59.726524    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xgwwm
	I0806 00:57:59.726532    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:59.726546    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:59.726556    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:59.729225    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:59.729241    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:59.729248    5434 round_trippers.go:580]     Audit-Id: f23d3e3e-304b-4e92-a2d6-49b4b22f01ce
	I0806 00:57:59.729252    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:59.729255    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:59.729261    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:59.729264    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:59.729267    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:59 GMT
	I0806 00:57:59.729379    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xgwwm","generateName":"kube-proxy-","namespace":"kube-system","uid":"f4cdef35-1817-4fab-a6a2-0141da3bb973","resourceVersion":"1714","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aeb7868a-2175-4480-b58d-3eb9a593c884","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aeb7868a-2175-4480-b58d-3eb9a593c884\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5832 chars]
	I0806 00:57:59.926618    5434 request.go:629] Waited for 196.817351ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:59.926675    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:59.926685    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:59.926694    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:59.926700    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:59.929231    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:59.929249    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:59.929260    5434 round_trippers.go:580]     Audit-Id: 48142590-da71-4404-bedb-b74b0430c085
	I0806 00:57:59.929267    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:59.929271    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:59.929274    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:59.929279    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:59.929286    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:58:00 GMT
	I0806 00:57:59.929372    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1741","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3143 chars]
	I0806 00:57:59.929584    5434 pod_ready.go:92] pod "kube-proxy-xgwwm" in "kube-system" namespace has status "Ready":"True"
	I0806 00:57:59.929594    5434 pod_ready.go:81] duration metric: took 399.813937ms for pod "kube-proxy-xgwwm" in "kube-system" namespace to be "Ready" ...
	I0806 00:57:59.929602    5434 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:58:00.126190    5434 request.go:629] Waited for 196.539569ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-100000
	I0806 00:58:00.126333    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-100000
	I0806 00:58:00.126349    5434 round_trippers.go:469] Request Headers:
	I0806 00:58:00.126361    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:58:00.126373    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:58:00.128989    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:58:00.129010    5434 round_trippers.go:577] Response Headers:
	I0806 00:58:00.129018    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:58:00 GMT
	I0806 00:58:00.129024    5434 round_trippers.go:580]     Audit-Id: 90a9b5e2-2d0e-4d7c-9e64-8e2b04889f34
	I0806 00:58:00.129028    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:58:00.129031    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:58:00.129035    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:58:00.129040    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:58:00.129165    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-100000","namespace":"kube-system","uid":"773d7bde-86f3-4e9d-b4aa-67ca3b345180","resourceVersion":"1547","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4d38f57d568be838072abd789adb44b9","kubernetes.io/config.mirror":"4d38f57d568be838072abd789adb44b9","kubernetes.io/config.seen":"2024-08-06T07:38:00.425836810Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5195 chars]
	I0806 00:58:00.326311    5434 request.go:629] Waited for 196.774304ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:58:00.326376    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:58:00.326385    5434 round_trippers.go:469] Request Headers:
	I0806 00:58:00.326396    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:58:00.326403    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:58:00.328549    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:58:00.328562    5434 round_trippers.go:577] Response Headers:
	I0806 00:58:00.328569    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:58:00.328574    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:58:00 GMT
	I0806 00:58:00.328578    5434 round_trippers.go:580]     Audit-Id: 9a1da674-12e1-4deb-a889-e64176873f6e
	I0806 00:58:00.328583    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:58:00.328587    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:58:00.328593    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:58:00.328751    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1566","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0806 00:58:00.329008    5434 pod_ready.go:92] pod "kube-scheduler-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:58:00.329019    5434 pod_ready.go:81] duration metric: took 399.403763ms for pod "kube-scheduler-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:58:00.329029    5434 pod_ready.go:38] duration metric: took 1.600314666s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 00:58:00.329048    5434 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 00:58:00.329107    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:58:00.339908    5434 system_svc.go:56] duration metric: took 10.859363ms WaitForService to wait for kubelet
	I0806 00:58:00.339921    5434 kubeadm.go:582] duration metric: took 21.805335392s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 00:58:00.339933    5434 node_conditions.go:102] verifying NodePressure condition ...
	I0806 00:58:00.526323    5434 request.go:629] Waited for 186.348415ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes
	I0806 00:58:00.526414    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0806 00:58:00.526423    5434 round_trippers.go:469] Request Headers:
	I0806 00:58:00.526431    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:58:00.526437    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:58:00.528924    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:58:00.528939    5434 round_trippers.go:577] Response Headers:
	I0806 00:58:00.528945    5434 round_trippers.go:580]     Audit-Id: c8982dd1-c794-4084-92fd-0f0afa65b0cf
	I0806 00:58:00.528948    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:58:00.528951    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:58:00.528953    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:58:00.528956    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:58:00.528959    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:58:00 GMT
	I0806 00:58:00.529101    5434 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1741"},"items":[{"metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1566","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14922 chars]
	I0806 00:58:00.529494    5434 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 00:58:00.529503    5434 node_conditions.go:123] node cpu capacity is 2
	I0806 00:58:00.529510    5434 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 00:58:00.529513    5434 node_conditions.go:123] node cpu capacity is 2
	I0806 00:58:00.529516    5434 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 00:58:00.529519    5434 node_conditions.go:123] node cpu capacity is 2
	I0806 00:58:00.529527    5434 node_conditions.go:105] duration metric: took 189.582536ms to run NodePressure ...
	I0806 00:58:00.529536    5434 start.go:241] waiting for startup goroutines ...
	I0806 00:58:00.529554    5434 start.go:255] writing updated cluster config ...
	I0806 00:58:00.551443    5434 out.go:177] 
	I0806 00:58:00.572784    5434 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:58:00.572913    5434 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:58:00.594886    5434 out.go:177] * Starting "multinode-100000-m03" worker node in "multinode-100000" cluster
	I0806 00:58:00.653191    5434 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:58:00.653231    5434 cache.go:56] Caching tarball of preloaded images
	I0806 00:58:00.653469    5434 preload.go:172] Found /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0806 00:58:00.653489    5434 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 00:58:00.653617    5434 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:58:00.654431    5434 start.go:360] acquireMachinesLock for multinode-100000-m03: {Name:mk23fe223591838ba69a1052c4474834b6e8897d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:58:00.654556    5434 start.go:364] duration metric: took 100.571µs to acquireMachinesLock for "multinode-100000-m03"
	I0806 00:58:00.654582    5434 start.go:96] Skipping create...Using existing machine configuration
	I0806 00:58:00.654590    5434 fix.go:54] fixHost starting: m03
	I0806 00:58:00.655008    5434 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:58:00.655043    5434 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:58:00.664436    5434 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53133
	I0806 00:58:00.664809    5434 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:58:00.665188    5434 main.go:141] libmachine: Using API Version  1
	I0806 00:58:00.665205    5434 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:58:00.665431    5434 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:58:00.665577    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .DriverName
	I0806 00:58:00.665664    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetState
	I0806 00:58:00.665753    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:58:00.665841    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | hyperkit pid from json: 5220
	I0806 00:58:00.666774    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | hyperkit pid 5220 missing from process table
	I0806 00:58:00.666801    5434 fix.go:112] recreateIfNeeded on multinode-100000-m03: state=Stopped err=<nil>
	I0806 00:58:00.666809    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .DriverName
	W0806 00:58:00.666891    5434 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 00:58:00.687912    5434 out.go:177] * Restarting existing hyperkit VM for "multinode-100000-m03" ...
	I0806 00:58:00.730015    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .Start
	I0806 00:58:00.730258    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:58:00.730287    5434 main.go:141] libmachine: (multinode-100000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/hyperkit.pid
	I0806 00:58:00.731560    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | hyperkit pid 5220 missing from process table
	I0806 00:58:00.731574    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | pid 5220 is in state "Stopped"
	I0806 00:58:00.731586    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/hyperkit.pid...
	I0806 00:58:00.731958    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | Using UUID 83a9a765-665a-44ea-930f-df1a6331c821
	I0806 00:58:00.756417    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | Generated MAC 4e:ad:42:3:c5:ed
	I0806 00:58:00.756443    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000
	I0806 00:58:00.756606    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:00 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"83a9a765-665a-44ea-930f-df1a6331c821", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000383590)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", pr
ocess:(*os.Process)(nil)}
	I0806 00:58:00.756641    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:00 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"83a9a765-665a-44ea-930f-df1a6331c821", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000383590)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", pr
ocess:(*os.Process)(nil)}
	I0806 00:58:00.756701    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:00 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "83a9a765-665a-44ea-930f-df1a6331c821", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/multinode-100000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/bzimage,/Users/jenkins
/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"}
	I0806 00:58:00.756764    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:00 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 83a9a765-665a-44ea-930f-df1a6331c821 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/multinode-100000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-1
00000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"
	I0806 00:58:00.756783    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:00 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0806 00:58:00.758162    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:00 DEBUG: hyperkit: Pid is 5554
	I0806 00:58:00.758623    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | Attempt 0
	I0806 00:58:00.758640    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:58:00.758688    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | hyperkit pid from json: 5554
	I0806 00:58:00.760393    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | Searching for 4e:ad:42:3:c5:ed in /var/db/dhcpd_leases ...
	I0806 00:58:00.760484    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | Found 14 entries in /var/db/dhcpd_leases!
	I0806 00:58:00.760502    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b32880}
	I0806 00:58:00.760521    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b32856}
	I0806 00:58:00.760533    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b327da}
	I0806 00:58:00.760543    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | Found match: 4e:ad:42:3:c5:ed
	I0806 00:58:00.760555    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | IP: 192.169.0.15
	I0806 00:58:00.760564    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetConfigRaw
	I0806 00:58:00.761447    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetIP
	I0806 00:58:00.761615    5434 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:58:00.762092    5434 machine.go:94] provisionDockerMachine start ...
	I0806 00:58:00.762103    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .DriverName
	I0806 00:58:00.762222    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHHostname
	I0806 00:58:00.762317    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHPort
	I0806 00:58:00.762411    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:58:00.762496    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:58:00.762578    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHUsername
	I0806 00:58:00.762708    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:58:00.762879    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0806 00:58:00.762886    5434 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 00:58:00.766147    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:00 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0806 00:58:00.775784    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:00 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0806 00:58:00.776767    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:58:00.776783    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:58:00.776790    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:58:00.776797    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:58:01.161002    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:01 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0806 00:58:01.161025    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:01 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0806 00:58:01.275830    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:58:01.275850    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:58:01.275864    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:58:01.275870    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:58:01.276688    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:01 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0806 00:58:01.276698    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:01 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0806 00:58:06.885456    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:06 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0806 00:58:06.885612    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:06 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0806 00:58:06.885621    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:06 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0806 00:58:06.909022    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:06 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0806 00:58:11.833120    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0806 00:58:11.833138    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetMachineName
	I0806 00:58:11.833293    5434 buildroot.go:166] provisioning hostname "multinode-100000-m03"
	I0806 00:58:11.833303    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetMachineName
	I0806 00:58:11.833405    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHHostname
	I0806 00:58:11.833498    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHPort
	I0806 00:58:11.833582    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:58:11.833689    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:58:11.833790    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHUsername
	I0806 00:58:11.833911    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:58:11.834050    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0806 00:58:11.834059    5434 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-100000-m03 && echo "multinode-100000-m03" | sudo tee /etc/hostname
	I0806 00:58:11.909385    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-100000-m03
	
	I0806 00:58:11.909402    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHHostname
	I0806 00:58:11.909532    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHPort
	I0806 00:58:11.909633    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:58:11.909726    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:58:11.909812    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHUsername
	I0806 00:58:11.909927    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:58:11.910056    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0806 00:58:11.910068    5434 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-100000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-100000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-100000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 00:58:11.978753    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:58:11.978769    5434 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19370-944/.minikube CaCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19370-944/.minikube}
	I0806 00:58:11.978782    5434 buildroot.go:174] setting up certificates
	I0806 00:58:11.978788    5434 provision.go:84] configureAuth start
	I0806 00:58:11.978795    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetMachineName
	I0806 00:58:11.978927    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetIP
	I0806 00:58:11.979051    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHHostname
	I0806 00:58:11.979145    5434 provision.go:143] copyHostCerts
	I0806 00:58:11.979173    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:58:11.979233    5434 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem, removing ...
	I0806 00:58:11.979238    5434 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:58:11.979398    5434 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem (1078 bytes)
	I0806 00:58:11.979584    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:58:11.979625    5434 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem, removing ...
	I0806 00:58:11.979630    5434 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:58:11.979731    5434 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem (1123 bytes)
	I0806 00:58:11.979873    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:58:11.979922    5434 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem, removing ...
	I0806 00:58:11.979926    5434 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:58:11.980034    5434 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem (1679 bytes)
	I0806 00:58:11.980181    5434 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem org=jenkins.multinode-100000-m03 san=[127.0.0.1 192.169.0.15 localhost minikube multinode-100000-m03]
	I0806 00:58:12.212453    5434 provision.go:177] copyRemoteCerts
	I0806 00:58:12.212501    5434 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 00:58:12.212516    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHHostname
	I0806 00:58:12.212656    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHPort
	I0806 00:58:12.212773    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:58:12.212873    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHUsername
	I0806 00:58:12.212983    5434 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/id_rsa Username:docker}
	I0806 00:58:12.250946    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0806 00:58:12.251023    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 00:58:12.270862    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0806 00:58:12.270931    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0806 00:58:12.290936    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0806 00:58:12.291014    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 00:58:12.310605    5434 provision.go:87] duration metric: took 331.803225ms to configureAuth
	I0806 00:58:12.310617    5434 buildroot.go:189] setting minikube options for container-runtime
	I0806 00:58:12.310775    5434 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:58:12.310788    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .DriverName
	I0806 00:58:12.310925    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHHostname
	I0806 00:58:12.311026    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHPort
	I0806 00:58:12.311114    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:58:12.311207    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:58:12.311295    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHUsername
	I0806 00:58:12.311399    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:58:12.311527    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0806 00:58:12.311534    5434 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0806 00:58:12.373876    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0806 00:58:12.373889    5434 buildroot.go:70] root file system type: tmpfs
	I0806 00:58:12.373965    5434 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0806 00:58:12.373978    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHHostname
	I0806 00:58:12.374107    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHPort
	I0806 00:58:12.374195    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:58:12.374282    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:58:12.374384    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHUsername
	I0806 00:58:12.374498    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:58:12.374639    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0806 00:58:12.374689    5434 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.13"
	Environment="NO_PROXY=192.169.0.13,192.169.0.14"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0806 00:58:12.450794    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.13
	Environment=NO_PROXY=192.169.0.13,192.169.0.14
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0806 00:58:12.450811    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHHostname
	I0806 00:58:12.450945    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHPort
	I0806 00:58:12.451041    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:58:12.451129    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:58:12.451221    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHUsername
	I0806 00:58:12.451348    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:58:12.451495    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0806 00:58:12.451508    5434 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0806 00:58:14.021658    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0806 00:58:14.021673    5434 machine.go:97] duration metric: took 13.259312225s to provisionDockerMachine
	I0806 00:58:14.021681    5434 start.go:293] postStartSetup for "multinode-100000-m03" (driver="hyperkit")
	I0806 00:58:14.021689    5434 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 00:58:14.021699    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .DriverName
	I0806 00:58:14.021902    5434 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 00:58:14.021916    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHHostname
	I0806 00:58:14.022001    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHPort
	I0806 00:58:14.022086    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:58:14.022165    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHUsername
	I0806 00:58:14.022256    5434 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/id_rsa Username:docker}
	I0806 00:58:14.066891    5434 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 00:58:14.071155    5434 command_runner.go:130] > NAME=Buildroot
	I0806 00:58:14.071166    5434 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0806 00:58:14.071170    5434 command_runner.go:130] > ID=buildroot
	I0806 00:58:14.071175    5434 command_runner.go:130] > VERSION_ID=2023.02.9
	I0806 00:58:14.071185    5434 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0806 00:58:14.071374    5434 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 00:58:14.071384    5434 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/addons for local assets ...
	I0806 00:58:14.071488    5434 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/files for local assets ...
	I0806 00:58:14.071680    5434 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> 14372.pem in /etc/ssl/certs
	I0806 00:58:14.071686    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> /etc/ssl/certs/14372.pem
	I0806 00:58:14.071894    5434 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 00:58:14.082409    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem --> /etc/ssl/certs/14372.pem (1708 bytes)
	I0806 00:58:14.110037    5434 start.go:296] duration metric: took 88.345962ms for postStartSetup
	I0806 00:58:14.110059    5434 fix.go:56] duration metric: took 13.455205562s for fixHost
	I0806 00:58:14.110075    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHHostname
	I0806 00:58:14.110208    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHPort
	I0806 00:58:14.110294    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:58:14.110376    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:58:14.110467    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHUsername
	I0806 00:58:14.110593    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:58:14.110732    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0806 00:58:14.110740    5434 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0806 00:58:14.176032    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722931094.071863234
	
	I0806 00:58:14.176045    5434 fix.go:216] guest clock: 1722931094.071863234
	I0806 00:58:14.176051    5434 fix.go:229] Guest: 2024-08-06 00:58:14.071863234 -0700 PDT Remote: 2024-08-06 00:58:14.110065 -0700 PDT m=+201.367961651 (delta=-38.201766ms)
	I0806 00:58:14.176061    5434 fix.go:200] guest clock delta is within tolerance: -38.201766ms
	I0806 00:58:14.176064    5434 start.go:83] releasing machines lock for "multinode-100000-m03", held for 13.521231837s
	I0806 00:58:14.176080    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .DriverName
	I0806 00:58:14.176208    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetIP
	I0806 00:58:14.199487    5434 out.go:177] * Found network options:
	I0806 00:58:14.220730    5434 out.go:177]   - NO_PROXY=192.169.0.13,192.169.0.14
	W0806 00:58:14.242504    5434 proxy.go:119] fail to check proxy env: Error ip not in block
	W0806 00:58:14.242537    5434 proxy.go:119] fail to check proxy env: Error ip not in block
	I0806 00:58:14.242557    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .DriverName
	I0806 00:58:14.243399    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .DriverName
	I0806 00:58:14.243765    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .DriverName
	I0806 00:58:14.243895    5434 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 00:58:14.243942    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHHostname
	W0806 00:58:14.244079    5434 proxy.go:119] fail to check proxy env: Error ip not in block
	W0806 00:58:14.244143    5434 proxy.go:119] fail to check proxy env: Error ip not in block
	I0806 00:58:14.244153    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHPort
	I0806 00:58:14.244266    5434 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0806 00:58:14.244310    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHHostname
	I0806 00:58:14.244321    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:58:14.244508    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHPort
	I0806 00:58:14.244531    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHUsername
	I0806 00:58:14.244626    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:58:14.244705    5434 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/id_rsa Username:docker}
	I0806 00:58:14.244803    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHUsername
	I0806 00:58:14.244937    5434 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/id_rsa Username:docker}
	I0806 00:58:14.279699    5434 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0806 00:58:14.279721    5434 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 00:58:14.279776    5434 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 00:58:14.330683    5434 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0806 00:58:14.330728    5434 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0806 00:58:14.330754    5434 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 00:58:14.330765    5434 start.go:495] detecting cgroup driver to use...
	I0806 00:58:14.330862    5434 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:58:14.346086    5434 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0806 00:58:14.346420    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0806 00:58:14.355635    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0806 00:58:14.364513    5434 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0806 00:58:14.364561    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0806 00:58:14.373312    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:58:14.382133    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0806 00:58:14.390853    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:58:14.399701    5434 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 00:58:14.408827    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0806 00:58:14.417835    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0806 00:58:14.426935    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
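The run of `ssh_runner` calls above rewrites containerd's `config.toml` in place with `sed`. As a minimal sketch of the `SystemdCgroup` edit on a throwaway file (GNU sed assumed; the file path and contents here are illustrative, not taken from the VM):

```shell
# Create a tiny stand-in for /etc/containerd/config.toml
cat > /tmp/demo-config.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# Same edit the log shows: force SystemdCgroup = false, preserving indentation
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /tmp/demo-config.toml

cat /tmp/demo-config.toml
```

The `\1` back-reference keeps the original leading whitespace, which matters because TOML tables in containerd's config are conventionally indented.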
	I0806 00:58:14.435957    5434 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 00:58:14.443786    5434 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0806 00:58:14.443882    5434 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 00:58:14.452060    5434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:58:14.558715    5434 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0806 00:58:14.578407    5434 start.go:495] detecting cgroup driver to use...
	I0806 00:58:14.578477    5434 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0806 00:58:14.597572    5434 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0806 00:58:14.598026    5434 command_runner.go:130] > [Unit]
	I0806 00:58:14.598038    5434 command_runner.go:130] > Description=Docker Application Container Engine
	I0806 00:58:14.598047    5434 command_runner.go:130] > Documentation=https://docs.docker.com
	I0806 00:58:14.598052    5434 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0806 00:58:14.598057    5434 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0806 00:58:14.598061    5434 command_runner.go:130] > StartLimitBurst=3
	I0806 00:58:14.598064    5434 command_runner.go:130] > StartLimitIntervalSec=60
	I0806 00:58:14.598067    5434 command_runner.go:130] > [Service]
	I0806 00:58:14.598070    5434 command_runner.go:130] > Type=notify
	I0806 00:58:14.598074    5434 command_runner.go:130] > Restart=on-failure
	I0806 00:58:14.598078    5434 command_runner.go:130] > Environment=NO_PROXY=192.169.0.13
	I0806 00:58:14.598083    5434 command_runner.go:130] > Environment=NO_PROXY=192.169.0.13,192.169.0.14
	I0806 00:58:14.598088    5434 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0806 00:58:14.598097    5434 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0806 00:58:14.598103    5434 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0806 00:58:14.598108    5434 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0806 00:58:14.598114    5434 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0806 00:58:14.598119    5434 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0806 00:58:14.598128    5434 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0806 00:58:14.598134    5434 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0806 00:58:14.598139    5434 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0806 00:58:14.598142    5434 command_runner.go:130] > ExecStart=
	I0806 00:58:14.598153    5434 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0806 00:58:14.598159    5434 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0806 00:58:14.598171    5434 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0806 00:58:14.598177    5434 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0806 00:58:14.598183    5434 command_runner.go:130] > LimitNOFILE=infinity
	I0806 00:58:14.598187    5434 command_runner.go:130] > LimitNPROC=infinity
	I0806 00:58:14.598190    5434 command_runner.go:130] > LimitCORE=infinity
	I0806 00:58:14.598195    5434 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0806 00:58:14.598199    5434 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0806 00:58:14.598203    5434 command_runner.go:130] > TasksMax=infinity
	I0806 00:58:14.598206    5434 command_runner.go:130] > TimeoutStartSec=0
	I0806 00:58:14.598212    5434 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0806 00:58:14.598215    5434 command_runner.go:130] > Delegate=yes
	I0806 00:58:14.598224    5434 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0806 00:58:14.598227    5434 command_runner.go:130] > KillMode=process
	I0806 00:58:14.598230    5434 command_runner.go:130] > [Install]
	I0806 00:58:14.598234    5434 command_runner.go:130] > WantedBy=multi-user.target
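The unit file dumped above relies on the systemd drop-in convention its own comments describe: an empty `ExecStart=` first clears the command inherited from the base unit, then the next `ExecStart=` sets the replacement. A minimal sketch of that pattern (hypothetical path and flag, not taken from this VM):

```
# /etc/systemd/system/docker.service.d/override.conf  (hypothetical drop-in)
[Service]
# Empty assignment clears ExecStart inherited from the base unit; without it
# systemd refuses to start, since Type=notify allows only one ExecStart.
ExecStart=
ExecStart=/usr/bin/dockerd --some-flag
```

After editing a drop-in, `systemctl daemon-reload` is needed for systemd to pick up the change, which matches the `daemon-reload` calls seen later in the log.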
	I0806 00:58:14.598413    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:58:14.613420    5434 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 00:58:14.629701    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:58:14.640859    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:58:14.651379    5434 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0806 00:58:14.673305    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:58:14.683771    5434 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:58:14.698544    5434 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0806 00:58:14.698791    5434 ssh_runner.go:195] Run: which cri-dockerd
	I0806 00:58:14.701580    5434 command_runner.go:130] > /usr/bin/cri-dockerd
	I0806 00:58:14.701750    5434 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0806 00:58:14.708820    5434 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0806 00:58:14.722421    5434 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0806 00:58:14.815094    5434 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0806 00:58:14.921962    5434 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0806 00:58:14.921985    5434 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
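The 130-byte `/etc/docker/daemon.json` written here carries the "cgroupfs" cgroup-driver setting mentioned on the previous line. The exact bytes aren't shown in the log; a plausible minimal content, using Docker's documented `exec-opts` key, would look like this (an assumption, not the file's verbatim contents):

```
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
```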
	I0806 00:58:14.935838    5434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:58:15.032162    5434 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:59:15.915566    5434 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0806 00:59:15.915581    5434 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0806 00:59:15.915774    5434 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.88239916s)
	I0806 00:59:15.915839    5434 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0806 00:59:15.924740    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 systemd[1]: Starting Docker Application Container Engine...
	I0806 00:59:15.924752    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:12.620205375Z" level=info msg="Starting up"
	I0806 00:59:15.924760    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:12.620885359Z" level=info msg="containerd not running, starting managed containerd"
	I0806 00:59:15.924774    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:12.621523310Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=494
	I0806 00:59:15.924784    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.640436395Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0806 00:59:15.924794    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.655975062Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0806 00:59:15.924809    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656077313Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0806 00:59:15.924819    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656226951Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0806 00:59:15.924828    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656271270Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0806 00:59:15.924839    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656455891Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:59:15.924848    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656499131Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0806 00:59:15.924867    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656643262Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:59:15.924875    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656684025Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0806 00:59:15.924886    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656715615Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:59:15.924896    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656749714Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0806 00:59:15.924907    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656891585Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0806 00:59:15.924916    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.657087147Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0806 00:59:15.924938    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.658771254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:59:15.924963    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.658832185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0806 00:59:15.925011    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.658977673Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:59:15.925024    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.659023792Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0806 00:59:15.925034    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.659168691Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0806 00:59:15.925042    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.659277517Z" level=info msg="metadata content store policy set" policy=shared
	I0806 00:59:15.925051    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.660551911Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0806 00:59:15.925060    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.660601241Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0806 00:59:15.925068    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.660615925Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0806 00:59:15.925078    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.660625942Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0806 00:59:15.925086    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.660642532Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0806 00:59:15.925095    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.660696000Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0806 00:59:15.925104    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.660982518Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0806 00:59:15.925115    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661131769Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0806 00:59:15.925124    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661166301Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0806 00:59:15.925135    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661177824Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0806 00:59:15.925145    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661187825Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0806 00:59:15.925154    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661196606Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0806 00:59:15.925163    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661205267Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0806 00:59:15.925172    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661214886Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0806 00:59:15.925181    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661224353Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0806 00:59:15.925190    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661232684Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0806 00:59:15.925473    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661240709Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0806 00:59:15.925484    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661248870Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0806 00:59:15.925495    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661261839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0806 00:59:15.925507    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661281648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0806 00:59:15.925515    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661292789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0806 00:59:15.925524    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661307256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0806 00:59:15.925533    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661319953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0806 00:59:15.925541    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661328979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0806 00:59:15.925549    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661337898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0806 00:59:15.925558    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661346271Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0806 00:59:15.925567    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661354564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0806 00:59:15.925575    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661363681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0806 00:59:15.925583    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661371351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0806 00:59:15.925592    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661378844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0806 00:59:15.925601    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661386749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0806 00:59:15.925612    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661396961Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0806 00:59:15.925621    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661410260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0806 00:59:15.925630    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661418222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0806 00:59:15.925639    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661426102Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0806 00:59:15.925648    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661470594Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0806 00:59:15.925660    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661510559Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0806 00:59:15.925671    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661520945Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0806 00:59:15.925747    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661528992Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0806 00:59:15.925759    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661535663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0806 00:59:15.925770    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661714555Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0806 00:59:15.925778    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661749667Z" level=info msg="NRI interface is disabled by configuration."
	I0806 00:59:15.925785    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661938092Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0806 00:59:15.925793    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661996010Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0806 00:59:15.925802    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.662029246Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0806 00:59:15.925809    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.662061316Z" level=info msg="containerd successfully booted in 0.022501s"
	I0806 00:59:15.925818    5434 command_runner.go:130] > Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.642985611Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0806 00:59:15.925825    5434 command_runner.go:130] > Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.656390226Z" level=info msg="Loading containers: start."
	I0806 00:59:15.925843    5434 command_runner.go:130] > Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.773927440Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0806 00:59:15.925854    5434 command_runner.go:130] > Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.836164993Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0806 00:59:15.925866    5434 command_runner.go:130] > Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.881102509Z" level=warning msg="error locating sandbox id 5eb4c04c1386508679e66336134c524325a604c101a04a94d158bc8e06676af1: sandbox 5eb4c04c1386508679e66336134c524325a604c101a04a94d158bc8e06676af1 not found"
	I0806 00:59:15.925876    5434 command_runner.go:130] > Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.881237996Z" level=info msg="Loading containers: done."
	I0806 00:59:15.925885    5434 command_runner.go:130] > Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.888707394Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	I0806 00:59:15.925893    5434 command_runner.go:130] > Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.888862219Z" level=info msg="Daemon has completed initialization"
	I0806 00:59:15.925908    5434 command_runner.go:130] > Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.911008448Z" level=info msg="API listen on /var/run/docker.sock"
	I0806 00:59:15.925915    5434 command_runner.go:130] > Aug 06 07:58:13 multinode-100000-m03 systemd[1]: Started Docker Application Container Engine.
	I0806 00:59:15.925923    5434 command_runner.go:130] > Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.913462716Z" level=info msg="API listen on [::]:2376"
	I0806 00:59:15.925930    5434 command_runner.go:130] > Aug 06 07:58:14 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:14.960059248Z" level=info msg="Processing signal 'terminated'"
	I0806 00:59:15.925940    5434 command_runner.go:130] > Aug 06 07:58:14 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:14.961027416Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0806 00:59:15.925948    5434 command_runner.go:130] > Aug 06 07:58:14 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:14.961153398Z" level=info msg="Daemon shutdown complete"
	I0806 00:59:15.925983    5434 command_runner.go:130] > Aug 06 07:58:14 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:14.961241454Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0806 00:59:15.925991    5434 command_runner.go:130] > Aug 06 07:58:14 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:14.961276079Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0806 00:59:15.925997    5434 command_runner.go:130] > Aug 06 07:58:14 multinode-100000-m03 systemd[1]: Stopping Docker Application Container Engine...
	I0806 00:59:15.926003    5434 command_runner.go:130] > Aug 06 07:58:15 multinode-100000-m03 systemd[1]: docker.service: Deactivated successfully.
	I0806 00:59:15.926009    5434 command_runner.go:130] > Aug 06 07:58:15 multinode-100000-m03 systemd[1]: Stopped Docker Application Container Engine.
	I0806 00:59:15.926015    5434 command_runner.go:130] > Aug 06 07:58:15 multinode-100000-m03 systemd[1]: Starting Docker Application Container Engine...
	I0806 00:59:15.926023    5434 command_runner.go:130] > Aug 06 07:58:16 multinode-100000-m03 dockerd[910]: time="2024-08-06T07:58:16.000826603Z" level=info msg="Starting up"
	I0806 00:59:15.926035    5434 command_runner.go:130] > Aug 06 07:59:16 multinode-100000-m03 dockerd[910]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0806 00:59:15.926044    5434 command_runner.go:130] > Aug 06 07:59:16 multinode-100000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0806 00:59:15.926051    5434 command_runner.go:130] > Aug 06 07:59:16 multinode-100000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0806 00:59:15.926056    5434 command_runner.go:130] > Aug 06 07:59:16 multinode-100000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	I0806 00:59:15.950293    5434 out.go:177] 
	W0806 00:59:15.971310    5434 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 06 07:58:12 multinode-100000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:58:12 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:12.620205375Z" level=info msg="Starting up"
	Aug 06 07:58:12 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:12.620885359Z" level=info msg="containerd not running, starting managed containerd"
	Aug 06 07:58:12 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:12.621523310Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=494
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.640436395Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.655975062Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656077313Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656226951Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656271270Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656455891Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656499131Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656643262Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656684025Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656715615Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656749714Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656891585Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.657087147Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.658771254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.658832185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.658977673Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.659023792Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.659168691Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.659277517Z" level=info msg="metadata content store policy set" policy=shared
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.660551911Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.660601241Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.660615925Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.660625942Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.660642532Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.660696000Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.660982518Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661131769Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661166301Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661177824Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661187825Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661196606Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661205267Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661214886Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661224353Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661232684Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661240709Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661248870Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661261839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661281648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661292789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661307256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661319953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661328979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661337898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661346271Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661354564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661363681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661371351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661378844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661386749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661396961Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661410260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661418222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661426102Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661470594Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661510559Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661520945Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661528992Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661535663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661714555Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661749667Z" level=info msg="NRI interface is disabled by configuration."
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661938092Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661996010Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.662029246Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.662061316Z" level=info msg="containerd successfully booted in 0.022501s"
	Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.642985611Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.656390226Z" level=info msg="Loading containers: start."
	Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.773927440Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.836164993Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.881102509Z" level=warning msg="error locating sandbox id 5eb4c04c1386508679e66336134c524325a604c101a04a94d158bc8e06676af1: sandbox 5eb4c04c1386508679e66336134c524325a604c101a04a94d158bc8e06676af1 not found"
	Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.881237996Z" level=info msg="Loading containers: done."
	Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.888707394Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.888862219Z" level=info msg="Daemon has completed initialization"
	Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.911008448Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 06 07:58:13 multinode-100000-m03 systemd[1]: Started Docker Application Container Engine.
	Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.913462716Z" level=info msg="API listen on [::]:2376"
	Aug 06 07:58:14 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:14.960059248Z" level=info msg="Processing signal 'terminated'"
	Aug 06 07:58:14 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:14.961027416Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 06 07:58:14 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:14.961153398Z" level=info msg="Daemon shutdown complete"
	Aug 06 07:58:14 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:14.961241454Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 06 07:58:14 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:14.961276079Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 06 07:58:14 multinode-100000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Aug 06 07:58:15 multinode-100000-m03 systemd[1]: docker.service: Deactivated successfully.
	Aug 06 07:58:15 multinode-100000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:58:15 multinode-100000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:58:16 multinode-100000-m03 dockerd[910]: time="2024-08-06T07:58:16.000826603Z" level=info msg="Starting up"
	Aug 06 07:59:16 multinode-100000-m03 dockerd[910]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 06 07:59:16 multinode-100000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 06 07:59:16 multinode-100000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 06 07:59:16 multinode-100000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 06 07:58:12 multinode-100000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:58:12 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:12.620205375Z" level=info msg="Starting up"
	Aug 06 07:58:12 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:12.620885359Z" level=info msg="containerd not running, starting managed containerd"
	Aug 06 07:58:12 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:12.621523310Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=494
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.640436395Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.655975062Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656077313Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656226951Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656271270Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656455891Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656499131Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656643262Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656684025Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656715615Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656749714Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656891585Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.657087147Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.658771254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.658832185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.658977673Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.659023792Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.659168691Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.659277517Z" level=info msg="metadata content store policy set" policy=shared
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.660551911Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.660601241Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.660615925Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.660625942Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.660642532Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.660696000Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.660982518Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661131769Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661166301Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661177824Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661187825Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661196606Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661205267Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661214886Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661224353Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661232684Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661240709Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661248870Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661261839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661281648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661292789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661307256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661319953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661328979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661337898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661346271Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661354564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661363681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661371351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661378844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661386749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661396961Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661410260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661418222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661426102Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661470594Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661510559Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661520945Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661528992Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661535663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661714555Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661749667Z" level=info msg="NRI interface is disabled by configuration."
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661938092Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661996010Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.662029246Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.662061316Z" level=info msg="containerd successfully booted in 0.022501s"
	Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.642985611Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.656390226Z" level=info msg="Loading containers: start."
	Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.773927440Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.836164993Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.881102509Z" level=warning msg="error locating sandbox id 5eb4c04c1386508679e66336134c524325a604c101a04a94d158bc8e06676af1: sandbox 5eb4c04c1386508679e66336134c524325a604c101a04a94d158bc8e06676af1 not found"
	Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.881237996Z" level=info msg="Loading containers: done."
	Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.888707394Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.888862219Z" level=info msg="Daemon has completed initialization"
	Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.911008448Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 06 07:58:13 multinode-100000-m03 systemd[1]: Started Docker Application Container Engine.
	Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.913462716Z" level=info msg="API listen on [::]:2376"
	Aug 06 07:58:14 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:14.960059248Z" level=info msg="Processing signal 'terminated'"
	Aug 06 07:58:14 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:14.961027416Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 06 07:58:14 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:14.961153398Z" level=info msg="Daemon shutdown complete"
	Aug 06 07:58:14 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:14.961241454Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 06 07:58:14 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:14.961276079Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 06 07:58:14 multinode-100000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Aug 06 07:58:15 multinode-100000-m03 systemd[1]: docker.service: Deactivated successfully.
	Aug 06 07:58:15 multinode-100000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:58:15 multinode-100000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:58:16 multinode-100000-m03 dockerd[910]: time="2024-08-06T07:58:16.000826603Z" level=info msg="Starting up"
	Aug 06 07:59:16 multinode-100000-m03 dockerd[910]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 06 07:59:16 multinode-100000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 06 07:59:16 multinode-100000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 06 07:59:16 multinode-100000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0806 00:59:15.971426    5434 out.go:239] * 
	* 
	W0806 00:59:15.972635    5434 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 00:59:16.034481    5434 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-100000" : exit status 90
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-100000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-100000 -n multinode-100000
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-100000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-100000 logs -n 25: (2.900750046s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| kubectl | -p multinode-100000 -- get pods -o   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:50 PDT | 06 Aug 24 00:50 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec          | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT |                     |
	|         | busybox-fc5497c4f-6l7f2 --           |                  |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec          | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | busybox-fc5497c4f-dzbn7 --           |                  |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec          | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT |                     |
	|         | busybox-fc5497c4f-6l7f2 --           |                  |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec          | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | busybox-fc5497c4f-dzbn7 --           |                  |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec          | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT |                     |
	|         | busybox-fc5497c4f-6l7f2 -- nslookup  |                  |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec          | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | busybox-fc5497c4f-dzbn7 -- nslookup  |                  |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- get pods -o   | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec          | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT |                     |
	|         | busybox-fc5497c4f-6l7f2              |                  |         |         |                     |                     |
	|         | -- sh -c nslookup                    |                  |         |         |                     |                     |
	|         | host.minikube.internal | awk         |                  |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec          | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | busybox-fc5497c4f-dzbn7              |                  |         |         |                     |                     |
	|         | -- sh -c nslookup                    |                  |         |         |                     |                     |
	|         | host.minikube.internal | awk         |                  |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                  |         |         |                     |                     |
	| kubectl | -p multinode-100000 -- exec          | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:51 PDT |
	|         | busybox-fc5497c4f-dzbn7 -- sh        |                  |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |                  |         |         |                     |                     |
	| node    | add -p multinode-100000 -v 3         | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:51 PDT | 06 Aug 24 00:52 PDT |
	|         | --alsologtostderr                    |                  |         |         |                     |                     |
	| node    | multinode-100000 node stop m03       | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:52 PDT | 06 Aug 24 00:52 PDT |
	| node    | multinode-100000 node start          | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:52 PDT | 06 Aug 24 00:53 PDT |
	|         | m03 -v=7 --alsologtostderr           |                  |         |         |                     |                     |
	| node    | list -p multinode-100000             | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:54 PDT |                     |
	| stop    | -p multinode-100000                  | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:54 PDT | 06 Aug 24 00:54 PDT |
	| start   | -p multinode-100000                  | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:54 PDT |                     |
	|         | --wait=true -v=8                     |                  |         |         |                     |                     |
	|         | --alsologtostderr                    |                  |         |         |                     |                     |
	| node    | list -p multinode-100000             | multinode-100000 | jenkins | v1.33.1 | 06 Aug 24 00:59 PDT |                     |
	|---------|--------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 00:54:52
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 00:54:52.775291    5434 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:54:52.775561    5434 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:54:52.775566    5434 out.go:304] Setting ErrFile to fd 2...
	I0806 00:54:52.775570    5434 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:54:52.775723    5434 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	I0806 00:54:52.777331    5434 out.go:298] Setting JSON to false
	I0806 00:54:52.799866    5434 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3254,"bootTime":1722927638,"procs":431,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0806 00:54:52.799957    5434 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 00:54:52.822010    5434 out.go:177] * [multinode-100000] minikube v1.33.1 on Darwin 14.5
	I0806 00:54:52.864712    5434 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 00:54:52.864770    5434 notify.go:220] Checking for updates...
	I0806 00:54:52.907409    5434 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:54:52.928567    5434 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0806 00:54:52.949610    5434 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:54:52.970563    5434 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 00:54:52.991585    5434 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 00:54:53.013277    5434 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:54:53.013490    5434 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:54:53.014138    5434 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:54:53.014217    5434 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:54:53.023954    5434 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53066
	I0806 00:54:53.024306    5434 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:54:53.024759    5434 main.go:141] libmachine: Using API Version  1
	I0806 00:54:53.024773    5434 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:54:53.025048    5434 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:54:53.025203    5434 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:54:53.053365    5434 out.go:177] * Using the hyperkit driver based on existing profile
	I0806 00:54:53.074587    5434 start.go:297] selected driver: hyperkit
	I0806 00:54:53.074644    5434 start.go:901] validating driver "hyperkit" against &{Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.15 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:f
alse ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binary
Mirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:54:53.074889    5434 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 00:54:53.075080    5434 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:54:53.075282    5434 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19370-944/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0806 00:54:53.084939    5434 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0806 00:54:53.088779    5434 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:54:53.088814    5434 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0806 00:54:53.091507    5434 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 00:54:53.091564    5434 cni.go:84] Creating CNI manager for ""
	I0806 00:54:53.091573    5434 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0806 00:54:53.091658    5434 start.go:340] cluster config:
	{Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.15 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:
false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:54:53.091766    5434 iso.go:125] acquiring lock: {Name:mka9ceffb203a07dd8928fb34e5b66df1a4204ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:54:53.133253    5434 out.go:177] * Starting "multinode-100000" primary control-plane node in "multinode-100000" cluster
	I0806 00:54:53.154509    5434 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:54:53.154586    5434 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0806 00:54:53.154619    5434 cache.go:56] Caching tarball of preloaded images
	I0806 00:54:53.154820    5434 preload.go:172] Found /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0806 00:54:53.154837    5434 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 00:54:53.155029    5434 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:54:53.155979    5434 start.go:360] acquireMachinesLock for multinode-100000: {Name:mk23fe223591838ba69a1052c4474834b6e8897d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:54:53.156123    5434 start.go:364] duration metric: took 115.218µs to acquireMachinesLock for "multinode-100000"
	I0806 00:54:53.156179    5434 start.go:96] Skipping create...Using existing machine configuration
	I0806 00:54:53.156190    5434 fix.go:54] fixHost starting: 
	I0806 00:54:53.156488    5434 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:54:53.156518    5434 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:54:53.165726    5434 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53068
	I0806 00:54:53.166104    5434 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:54:53.166447    5434 main.go:141] libmachine: Using API Version  1
	I0806 00:54:53.166459    5434 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:54:53.166680    5434 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:54:53.166799    5434 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:54:53.166912    5434 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:54:53.167000    5434 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:54:53.167075    5434 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 4303
	I0806 00:54:53.167993    5434 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid 4303 missing from process table
	I0806 00:54:53.168044    5434 fix.go:112] recreateIfNeeded on multinode-100000: state=Stopped err=<nil>
	I0806 00:54:53.168068    5434 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	W0806 00:54:53.168161    5434 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 00:54:53.210553    5434 out.go:177] * Restarting existing hyperkit VM for "multinode-100000" ...
	I0806 00:54:53.233510    5434 main.go:141] libmachine: (multinode-100000) Calling .Start
	I0806 00:54:53.233779    5434 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:54:53.233830    5434 main.go:141] libmachine: (multinode-100000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/hyperkit.pid
	I0806 00:54:53.235587    5434 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid 4303 missing from process table
	I0806 00:54:53.235601    5434 main.go:141] libmachine: (multinode-100000) DBG | pid 4303 is in state "Stopped"
	I0806 00:54:53.235624    5434 main.go:141] libmachine: (multinode-100000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/hyperkit.pid...
	I0806 00:54:53.235833    5434 main.go:141] libmachine: (multinode-100000) DBG | Using UUID 9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848
	I0806 00:54:53.349771    5434 main.go:141] libmachine: (multinode-100000) DBG | Generated MAC 1a:eb:5b:3:28:91
	I0806 00:54:53.349804    5434 main.go:141] libmachine: (multinode-100000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000
	I0806 00:54:53.349923    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b87e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(
nil)}
	I0806 00:54:53.349949    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b87e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(
nil)}
	I0806 00:54:53.350000    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/multinode-100000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage,/Users/jenkins/minikube-integration/19370-944/
.minikube/machines/multinode-100000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"}
	I0806 00:54:53.350046    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9d6de1a4-25d9-49b5-bb0f-6ea8b6ad2848 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/multinode-100000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/console-ring -f kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/initrd,earlyprintk=serial
loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"
	I0806 00:54:53.350064    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0806 00:54:53.351421    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 DEBUG: hyperkit: Pid is 5446
	I0806 00:54:53.351799    5434 main.go:141] libmachine: (multinode-100000) DBG | Attempt 0
	I0806 00:54:53.351809    5434 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:54:53.351891    5434 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 5446
	I0806 00:54:53.353820    5434 main.go:141] libmachine: (multinode-100000) DBG | Searching for 1a:eb:5b:3:28:91 in /var/db/dhcpd_leases ...
	I0806 00:54:53.353926    5434 main.go:141] libmachine: (multinode-100000) DBG | Found 14 entries in /var/db/dhcpd_leases!
	I0806 00:54:53.353945    5434 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b327da}
	I0806 00:54:53.353958    5434 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b32483}
	I0806 00:54:53.353969    5434 main.go:141] libmachine: (multinode-100000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b323cf}
	I0806 00:54:53.353976    5434 main.go:141] libmachine: (multinode-100000) DBG | Found match: 1a:eb:5b:3:28:91
	I0806 00:54:53.353983    5434 main.go:141] libmachine: (multinode-100000) DBG | IP: 192.169.0.13
	I0806 00:54:53.354064    5434 main.go:141] libmachine: (multinode-100000) Calling .GetConfigRaw
	I0806 00:54:53.354774    5434 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:54:53.355023    5434 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:54:53.355524    5434 machine.go:94] provisionDockerMachine start ...
	I0806 00:54:53.355536    5434 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:54:53.355691    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:54:53.355814    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:54:53.355925    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:54:53.356036    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:54:53.356154    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:54:53.356323    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:54:53.356521    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:54:53.356533    5434 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 00:54:53.359935    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0806 00:54:53.411612    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0806 00:54:53.412320    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:54:53.412339    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:54:53.412346    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:54:53.412355    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:54:53.793354    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0806 00:54:53.793370    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0806 00:54:53.907960    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:54:53.907981    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:54:53.907996    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:54:53.908005    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:54:53.908869    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0806 00:54:53.908882    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:53 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0806 00:54:59.470791    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:59 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0806 00:54:59.470906    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:59 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0806 00:54:59.470916    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:59 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0806 00:54:59.495324    5434 main.go:141] libmachine: (multinode-100000) DBG | 2024/08/06 00:54:59 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0806 00:55:04.433190    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0806 00:55:04.433204    5434 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:55:04.433414    5434 buildroot.go:166] provisioning hostname "multinode-100000"
	I0806 00:55:04.433426    5434 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:55:04.433525    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:55:04.433619    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:55:04.433715    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:55:04.433824    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:55:04.433936    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:55:04.434099    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:55:04.434280    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:55:04.434302    5434 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-100000 && echo "multinode-100000" | sudo tee /etc/hostname
	I0806 00:55:04.510650    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-100000
	
	I0806 00:55:04.510671    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:55:04.510814    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:55:04.510917    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:55:04.511009    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:55:04.511103    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:55:04.511218    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:55:04.511376    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:55:04.511388    5434 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-100000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-100000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-100000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 00:55:04.581815    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:55:04.581856    5434 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19370-944/.minikube CaCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19370-944/.minikube}
	I0806 00:55:04.581885    5434 buildroot.go:174] setting up certificates
	I0806 00:55:04.581892    5434 provision.go:84] configureAuth start
	I0806 00:55:04.581900    5434 main.go:141] libmachine: (multinode-100000) Calling .GetMachineName
	I0806 00:55:04.582032    5434 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:55:04.582112    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:55:04.582198    5434 provision.go:143] copyHostCerts
	I0806 00:55:04.582227    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:55:04.582303    5434 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem, removing ...
	I0806 00:55:04.582311    5434 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:55:04.582460    5434 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem (1123 bytes)
	I0806 00:55:04.582669    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:55:04.582710    5434 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem, removing ...
	I0806 00:55:04.582715    5434 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:55:04.582803    5434 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem (1679 bytes)
	I0806 00:55:04.582953    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:55:04.582994    5434 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem, removing ...
	I0806 00:55:04.582999    5434 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:55:04.583086    5434 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem (1078 bytes)
	I0806 00:55:04.583248    5434 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem org=jenkins.multinode-100000 san=[127.0.0.1 192.169.0.13 localhost minikube multinode-100000]
	I0806 00:55:04.712424    5434 provision.go:177] copyRemoteCerts
	I0806 00:55:04.712483    5434 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 00:55:04.712499    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:55:04.712641    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:55:04.712739    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:55:04.712831    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:55:04.712916    5434 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:55:04.750794    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0806 00:55:04.750868    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 00:55:04.771056    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0806 00:55:04.771110    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0806 00:55:04.790705    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0806 00:55:04.790769    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 00:55:04.810426    5434 provision.go:87] duration metric: took 228.51549ms to configureAuth
	I0806 00:55:04.810439    5434 buildroot.go:189] setting minikube options for container-runtime
	I0806 00:55:04.810605    5434 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:55:04.810620    5434 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:55:04.810754    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:55:04.810848    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:55:04.810933    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:55:04.811014    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:55:04.811089    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:55:04.811201    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:55:04.811331    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:55:04.811339    5434 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0806 00:55:04.876926    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0806 00:55:04.876938    5434 buildroot.go:70] root file system type: tmpfs
	I0806 00:55:04.877025    5434 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0806 00:55:04.877040    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:55:04.877182    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:55:04.877280    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:55:04.877378    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:55:04.877466    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:55:04.877597    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:55:04.877740    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:55:04.877784    5434 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0806 00:55:04.953206    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0806 00:55:04.953225    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:55:04.953377    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:55:04.953483    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:55:04.953589    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:55:04.953690    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:55:04.953819    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:55:04.953957    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:55:04.953970    5434 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0806 00:55:06.623296    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0806 00:55:06.623311    5434 machine.go:97] duration metric: took 13.267517182s to provisionDockerMachine
	I0806 00:55:06.623323    5434 start.go:293] postStartSetup for "multinode-100000" (driver="hyperkit")
	I0806 00:55:06.623330    5434 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 00:55:06.623347    5434 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:55:06.623540    5434 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 00:55:06.623553    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:55:06.623643    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:55:06.623737    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:55:06.623841    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:55:06.623952    5434 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:55:06.668104    5434 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 00:55:06.671469    5434 command_runner.go:130] > NAME=Buildroot
	I0806 00:55:06.671477    5434 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0806 00:55:06.671481    5434 command_runner.go:130] > ID=buildroot
	I0806 00:55:06.671485    5434 command_runner.go:130] > VERSION_ID=2023.02.9
	I0806 00:55:06.671488    5434 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0806 00:55:06.671619    5434 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 00:55:06.671630    5434 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/addons for local assets ...
	I0806 00:55:06.671730    5434 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/files for local assets ...
	I0806 00:55:06.671922    5434 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> 14372.pem in /etc/ssl/certs
	I0806 00:55:06.671928    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> /etc/ssl/certs/14372.pem
	I0806 00:55:06.672134    5434 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 00:55:06.682041    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem --> /etc/ssl/certs/14372.pem (1708 bytes)
	I0806 00:55:06.712670    5434 start.go:296] duration metric: took 89.337079ms for postStartSetup
	I0806 00:55:06.712696    5434 fix.go:56] duration metric: took 13.556242885s for fixHost
	I0806 00:55:06.712709    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:55:06.712842    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:55:06.712939    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:55:06.713031    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:55:06.713121    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:55:06.713260    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:55:06.713404    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0806 00:55:06.713411    5434 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 00:55:06.779050    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722930906.844084403
	
	I0806 00:55:06.779062    5434 fix.go:216] guest clock: 1722930906.844084403
	I0806 00:55:06.779068    5434 fix.go:229] Guest: 2024-08-06 00:55:06.844084403 -0700 PDT Remote: 2024-08-06 00:55:06.712699 -0700 PDT m=+13.974282859 (delta=131.385403ms)
	I0806 00:55:06.779083    5434 fix.go:200] guest clock delta is within tolerance: 131.385403ms
	I0806 00:55:06.779088    5434 start.go:83] releasing machines lock for "multinode-100000", held for 13.622685085s
	I0806 00:55:06.779108    5434 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:55:06.779243    5434 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:55:06.779354    5434 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:55:06.779683    5434 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:55:06.779782    5434 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:55:06.779886    5434 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 00:55:06.779913    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:55:06.779957    5434 ssh_runner.go:195] Run: cat /version.json
	I0806 00:55:06.779977    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:55:06.780040    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:55:06.780076    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:55:06.780159    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:55:06.780196    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:55:06.780314    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:55:06.780331    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:55:06.780402    5434 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:55:06.780430    5434 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:55:06.862442    5434 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0806 00:55:06.862500    5434 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0806 00:55:06.862677    5434 ssh_runner.go:195] Run: systemctl --version
	I0806 00:55:06.867604    5434 command_runner.go:130] > systemd 252 (252)
	I0806 00:55:06.867628    5434 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0806 00:55:06.867839    5434 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0806 00:55:06.872017    5434 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0806 00:55:06.872077    5434 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 00:55:06.872121    5434 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 00:55:06.885766    5434 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0806 00:55:06.885848    5434 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 00:55:06.885861    5434 start.go:495] detecting cgroup driver to use...
	I0806 00:55:06.885952    5434 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:55:06.900629    5434 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0806 00:55:06.900887    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0806 00:55:06.909937    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0806 00:55:06.918880    5434 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0806 00:55:06.918922    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0806 00:55:06.927993    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:55:06.936831    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0806 00:55:06.945909    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:55:06.954813    5434 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 00:55:06.963998    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0806 00:55:06.972888    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0806 00:55:06.981863    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0806 00:55:06.990782    5434 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 00:55:06.998891    5434 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0806 00:55:06.999023    5434 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 00:55:07.008442    5434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:55:07.111172    5434 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0806 00:55:07.129602    5434 start.go:495] detecting cgroup driver to use...
	I0806 00:55:07.129681    5434 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0806 00:55:07.146741    5434 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0806 00:55:07.147296    5434 command_runner.go:130] > [Unit]
	I0806 00:55:07.147306    5434 command_runner.go:130] > Description=Docker Application Container Engine
	I0806 00:55:07.147311    5434 command_runner.go:130] > Documentation=https://docs.docker.com
	I0806 00:55:07.147316    5434 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0806 00:55:07.147321    5434 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0806 00:55:07.147341    5434 command_runner.go:130] > StartLimitBurst=3
	I0806 00:55:07.147347    5434 command_runner.go:130] > StartLimitIntervalSec=60
	I0806 00:55:07.147351    5434 command_runner.go:130] > [Service]
	I0806 00:55:07.147354    5434 command_runner.go:130] > Type=notify
	I0806 00:55:07.147358    5434 command_runner.go:130] > Restart=on-failure
	I0806 00:55:07.147363    5434 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0806 00:55:07.147370    5434 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0806 00:55:07.147376    5434 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0806 00:55:07.147382    5434 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0806 00:55:07.147388    5434 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0806 00:55:07.147392    5434 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0806 00:55:07.147398    5434 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0806 00:55:07.147414    5434 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0806 00:55:07.147421    5434 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0806 00:55:07.147428    5434 command_runner.go:130] > ExecStart=
	I0806 00:55:07.147440    5434 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0806 00:55:07.147445    5434 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0806 00:55:07.147452    5434 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0806 00:55:07.147458    5434 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0806 00:55:07.147462    5434 command_runner.go:130] > LimitNOFILE=infinity
	I0806 00:55:07.147466    5434 command_runner.go:130] > LimitNPROC=infinity
	I0806 00:55:07.147478    5434 command_runner.go:130] > LimitCORE=infinity
	I0806 00:55:07.147483    5434 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0806 00:55:07.147488    5434 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0806 00:55:07.147493    5434 command_runner.go:130] > TasksMax=infinity
	I0806 00:55:07.147498    5434 command_runner.go:130] > TimeoutStartSec=0
	I0806 00:55:07.147510    5434 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0806 00:55:07.147518    5434 command_runner.go:130] > Delegate=yes
	I0806 00:55:07.147526    5434 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0806 00:55:07.147536    5434 command_runner.go:130] > KillMode=process
	I0806 00:55:07.147540    5434 command_runner.go:130] > [Install]
	I0806 00:55:07.147551    5434 command_runner.go:130] > WantedBy=multi-user.target
	I0806 00:55:07.147629    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:55:07.159343    5434 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 00:55:07.174076    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:55:07.185284    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:55:07.196345    5434 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0806 00:55:07.220943    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:55:07.232200    5434 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:55:07.246532    5434 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0806 00:55:07.246763    5434 ssh_runner.go:195] Run: which cri-dockerd
	I0806 00:55:07.249395    5434 command_runner.go:130] > /usr/bin/cri-dockerd
	I0806 00:55:07.249601    5434 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0806 00:55:07.256709    5434 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0806 00:55:07.270264    5434 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0806 00:55:07.373249    5434 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0806 00:55:07.470581    5434 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0806 00:55:07.470656    5434 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0806 00:55:07.484033    5434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:55:07.585356    5434 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:55:09.915028    5434 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.329607446s)
	I0806 00:55:09.915085    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0806 00:55:09.926762    5434 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0806 00:55:09.941366    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0806 00:55:09.953827    5434 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0806 00:55:10.050234    5434 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0806 00:55:10.161226    5434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:55:10.271189    5434 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0806 00:55:10.284569    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0806 00:55:10.295807    5434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:55:10.407189    5434 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0806 00:55:10.463062    5434 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0806 00:55:10.463140    5434 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0806 00:55:10.467071    5434 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0806 00:55:10.467082    5434 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0806 00:55:10.467086    5434 command_runner.go:130] > Device: 0,22	Inode: 753         Links: 1
	I0806 00:55:10.467091    5434 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0806 00:55:10.467096    5434 command_runner.go:130] > Access: 2024-08-06 07:55:10.485303147 +0000
	I0806 00:55:10.467101    5434 command_runner.go:130] > Modify: 2024-08-06 07:55:10.485303147 +0000
	I0806 00:55:10.467106    5434 command_runner.go:130] > Change: 2024-08-06 07:55:10.486303006 +0000
	I0806 00:55:10.467111    5434 command_runner.go:130] >  Birth: -
	I0806 00:55:10.467303    5434 start.go:563] Will wait 60s for crictl version
	I0806 00:55:10.467344    5434 ssh_runner.go:195] Run: which crictl
	I0806 00:55:10.470189    5434 command_runner.go:130] > /usr/bin/crictl
	I0806 00:55:10.470513    5434 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 00:55:10.499752    5434 command_runner.go:130] > Version:  0.1.0
	I0806 00:55:10.499767    5434 command_runner.go:130] > RuntimeName:  docker
	I0806 00:55:10.499770    5434 command_runner.go:130] > RuntimeVersion:  27.1.1
	I0806 00:55:10.499774    5434 command_runner.go:130] > RuntimeApiVersion:  v1
	I0806 00:55:10.500795    5434 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0806 00:55:10.500863    5434 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0806 00:55:10.517201    5434 command_runner.go:130] > 27.1.1
	I0806 00:55:10.518128    5434 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0806 00:55:10.535554    5434 command_runner.go:130] > 27.1.1
	I0806 00:55:10.579645    5434 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0806 00:55:10.579691    5434 main.go:141] libmachine: (multinode-100000) Calling .GetIP
	I0806 00:55:10.580056    5434 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0806 00:55:10.584485    5434 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 00:55:10.594933    5434 kubeadm.go:883] updating cluster {Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.15 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-
dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0806 00:55:10.595035    5434 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:55:10.595090    5434 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0806 00:55:10.607660    5434 command_runner.go:130] > kindest/kindnetd:v20240730-75a5af0c
	I0806 00:55:10.607674    5434 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0806 00:55:10.607678    5434 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0806 00:55:10.607682    5434 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0806 00:55:10.607686    5434 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0806 00:55:10.607690    5434 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0806 00:55:10.607694    5434 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0806 00:55:10.607711    5434 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0806 00:55:10.607716    5434 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:55:10.607720    5434 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0806 00:55:10.609002    5434 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240730-75a5af0c
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0806 00:55:10.609015    5434 docker.go:615] Images already preloaded, skipping extraction
	I0806 00:55:10.609085    5434 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0806 00:55:10.620324    5434 command_runner.go:130] > kindest/kindnetd:v20240730-75a5af0c
	I0806 00:55:10.620345    5434 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0806 00:55:10.620349    5434 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0806 00:55:10.620354    5434 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0806 00:55:10.620358    5434 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0806 00:55:10.620362    5434 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0806 00:55:10.620366    5434 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0806 00:55:10.620370    5434 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0806 00:55:10.620375    5434 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 00:55:10.620379    5434 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0806 00:55:10.620837    5434 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240730-75a5af0c
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0806 00:55:10.620857    5434 cache_images.go:84] Images are preloaded, skipping loading
	I0806 00:55:10.620870    5434 kubeadm.go:934] updating node { 192.169.0.13 8443 v1.30.3 docker true true} ...
	I0806 00:55:10.620947    5434 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-100000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 00:55:10.621028    5434 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0806 00:55:10.656828    5434 command_runner.go:130] > cgroupfs
	I0806 00:55:10.657651    5434 cni.go:84] Creating CNI manager for ""
	I0806 00:55:10.657667    5434 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0806 00:55:10.657680    5434 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0806 00:55:10.657699    5434 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.13 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-100000 NodeName:multinode-100000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0806 00:55:10.657785    5434 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-100000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0806 00:55:10.657836    5434 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 00:55:10.666311    5434 command_runner.go:130] > kubeadm
	I0806 00:55:10.666320    5434 command_runner.go:130] > kubectl
	I0806 00:55:10.666324    5434 command_runner.go:130] > kubelet
	I0806 00:55:10.666336    5434 binaries.go:44] Found k8s binaries, skipping transfer
	I0806 00:55:10.666376    5434 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0806 00:55:10.674599    5434 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0806 00:55:10.688184    5434 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 00:55:10.701466    5434 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0806 00:55:10.715212    5434 ssh_runner.go:195] Run: grep 192.169.0.13	control-plane.minikube.internal$ /etc/hosts
	I0806 00:55:10.717953    5434 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 00:55:10.727893    5434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:55:10.820115    5434 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:55:10.832903    5434 certs.go:68] Setting up /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000 for IP: 192.169.0.13
	I0806 00:55:10.832915    5434 certs.go:194] generating shared ca certs ...
	I0806 00:55:10.832929    5434 certs.go:226] acquiring lock for ca certs: {Name:mk58145664d6c2b1eff70ba1600cc91cf1a11355 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:55:10.833128    5434 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.key
	I0806 00:55:10.833206    5434 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.key
	I0806 00:55:10.833216    5434 certs.go:256] generating profile certs ...
	I0806 00:55:10.833328    5434 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key
	I0806 00:55:10.833415    5434 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key.de816dec
	I0806 00:55:10.833485    5434 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key
	I0806 00:55:10.833492    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0806 00:55:10.833513    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0806 00:55:10.833532    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0806 00:55:10.833551    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0806 00:55:10.833568    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0806 00:55:10.833598    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0806 00:55:10.833629    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0806 00:55:10.833648    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0806 00:55:10.833756    5434 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437.pem (1338 bytes)
	W0806 00:55:10.833801    5434 certs.go:480] ignoring /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437_empty.pem, impossibly tiny 0 bytes
	I0806 00:55:10.833808    5434 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 00:55:10.833839    5434 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem (1078 bytes)
	I0806 00:55:10.833872    5434 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem (1123 bytes)
	I0806 00:55:10.833906    5434 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem (1679 bytes)
	I0806 00:55:10.833974    5434 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem (1708 bytes)
	I0806 00:55:10.834010    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> /usr/share/ca-certificates/14372.pem
	I0806 00:55:10.834032    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:55:10.834049    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437.pem -> /usr/share/ca-certificates/1437.pem
	I0806 00:55:10.834498    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 00:55:10.864424    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 00:55:10.891260    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 00:55:10.914747    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0806 00:55:10.943675    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0806 00:55:10.965018    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0806 00:55:10.984529    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0806 00:55:11.003871    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0806 00:55:11.023031    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem --> /usr/share/ca-certificates/14372.pem (1708 bytes)
	I0806 00:55:11.042125    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 00:55:11.061390    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437.pem --> /usr/share/ca-certificates/1437.pem (1338 bytes)
	I0806 00:55:11.080969    5434 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0806 00:55:11.094345    5434 ssh_runner.go:195] Run: openssl version
	I0806 00:55:11.098214    5434 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0806 00:55:11.098412    5434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 00:55:11.107460    5434 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:55:11.110604    5434 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  6 07:05 /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:55:11.110750    5434 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:05 /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:55:11.110787    5434 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:55:11.114716    5434 command_runner.go:130] > b5213941
	I0806 00:55:11.114958    5434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 00:55:11.123886    5434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1437.pem && ln -fs /usr/share/ca-certificates/1437.pem /etc/ssl/certs/1437.pem"
	I0806 00:55:11.132800    5434 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1437.pem
	I0806 00:55:11.135908    5434 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  6 07:14 /usr/share/ca-certificates/1437.pem
	I0806 00:55:11.135951    5434 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:14 /usr/share/ca-certificates/1437.pem
	I0806 00:55:11.135985    5434 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1437.pem
	I0806 00:55:11.139937    5434 command_runner.go:130] > 51391683
	I0806 00:55:11.140149    5434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1437.pem /etc/ssl/certs/51391683.0"
	I0806 00:55:11.149071    5434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14372.pem && ln -fs /usr/share/ca-certificates/14372.pem /etc/ssl/certs/14372.pem"
	I0806 00:55:11.157866    5434 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14372.pem
	I0806 00:55:11.161027    5434 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  6 07:14 /usr/share/ca-certificates/14372.pem
	I0806 00:55:11.161128    5434 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:14 /usr/share/ca-certificates/14372.pem
	I0806 00:55:11.161162    5434 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14372.pem
	I0806 00:55:11.165060    5434 command_runner.go:130] > 3ec20f2e
	I0806 00:55:11.165263    5434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14372.pem /etc/ssl/certs/3ec20f2e.0"
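The `openssl x509 -hash` / `ln -fs <hash>.0` sequence above is minikube publishing each CA the way OpenSSL's `-CApath` lookup expects: a symlink in the certs directory named `<subject-hash>.0` pointing at the PEM file. A minimal sketch of the same convention, using a throwaway self-signed CA in a temp dir (assumes `openssl` is on PATH; all names here are illustrative, not from the log):

```shell
# Create a disposable self-signed CA, then install it under its
# subject-hash name, as minikube does for minikubeCA.pem above.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demoCA" -keyout "$tmp/ca.key" -out "$tmp/ca.pem" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$tmp/ca.pem")
ln -fs "$tmp/ca.pem" "$tmp/$hash.0"   # hash-named symlink, e.g. <hash>.0
# -CApath resolves the issuer through the <hash>.0 symlink:
openssl verify -CApath "$tmp" "$tmp/ca.pem"
```

`openssl verify` reports `<path>: OK` once the hash symlink resolves; the log's `test -L ... || ln -fs ...` guard is the idempotent version of the same install step.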
	I0806 00:55:11.174094    5434 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 00:55:11.177167    5434 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 00:55:11.177181    5434 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0806 00:55:11.177187    5434 command_runner.go:130] > Device: 253,1	Inode: 531528      Links: 1
	I0806 00:55:11.177192    5434 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0806 00:55:11.177197    5434 command_runner.go:130] > Access: 2024-08-06 07:37:53.344202328 +0000
	I0806 00:55:11.177201    5434 command_runner.go:130] > Modify: 2024-08-06 07:37:53.344202328 +0000
	I0806 00:55:11.177207    5434 command_runner.go:130] > Change: 2024-08-06 07:37:53.344202328 +0000
	I0806 00:55:11.177212    5434 command_runner.go:130] >  Birth: 2024-08-06 07:37:53.344202328 +0000
	I0806 00:55:11.177350    5434 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0806 00:55:11.181436    5434 command_runner.go:130] > Certificate will not expire
	I0806 00:55:11.181604    5434 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0806 00:55:11.185540    5434 command_runner.go:130] > Certificate will not expire
	I0806 00:55:11.185693    5434 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0806 00:55:11.189793    5434 command_runner.go:130] > Certificate will not expire
	I0806 00:55:11.189985    5434 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0806 00:55:11.193916    5434 command_runner.go:130] > Certificate will not expire
	I0806 00:55:11.194116    5434 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0806 00:55:11.198028    5434 command_runner.go:130] > Certificate will not expire
	I0806 00:55:11.198231    5434 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0806 00:55:11.202137    5434 command_runner.go:130] > Certificate will not expire
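Each `-checkend 86400` invocation above asks OpenSSL whether the certificate expires within the next 86400 seconds (24 hours); exit status 0 with `Certificate will not expire` means the cert is safe to reuse, which is why the restart path skips regenerating them. A hedged sketch of the check against a throwaway two-day cert (names are illustrative only):

```shell
# Generate a cert valid for 2 days, then probe two expiry windows.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 2 \
  -subj "/CN=expiry-demo" -keyout "$tmp/k.pem" -out "$tmp/c.pem" 2>/dev/null
# Inside validity window: exit 0, message "Certificate will not expire"
openssl x509 -noout -in "$tmp/c.pem" -checkend 86400 \
  && echo "safe to reuse for at least 24h"
# 259200s = 72h window: the 2-day cert fails this check (exit 1)
openssl x509 -noout -in "$tmp/c.pem" -checkend 259200 \
  || echo "would expire within 72h"
```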
	I0806 00:55:11.202319    5434 kubeadm.go:392] StartCluster: {Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.15 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:55:11.202443    5434 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0806 00:55:11.215188    5434 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0806 00:55:11.223263    5434 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0806 00:55:11.223276    5434 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0806 00:55:11.223283    5434 command_runner.go:130] > /var/lib/minikube/etcd:
	I0806 00:55:11.223302    5434 command_runner.go:130] > member
	I0806 00:55:11.223406    5434 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0806 00:55:11.223415    5434 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0806 00:55:11.223453    5434 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0806 00:55:11.231409    5434 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0806 00:55:11.231732    5434 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-100000" does not appear in /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:55:11.231826    5434 kubeconfig.go:62] /Users/jenkins/minikube-integration/19370-944/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-100000" cluster setting kubeconfig missing "multinode-100000" context setting]
	I0806 00:55:11.232022    5434 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/kubeconfig: {Name:mka547673b59bc4eb06e1f2c8130de31708dba29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:55:11.232670    5434 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:55:11.232876    5434 kapi.go:59] client config for multinode-100000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key", CAFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1231e1a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 00:55:11.233199    5434 cert_rotation.go:137] Starting client certificate rotation controller
	I0806 00:55:11.233368    5434 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0806 00:55:11.241354    5434 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.13
	I0806 00:55:11.241372    5434 kubeadm.go:1160] stopping kube-system containers ...
	I0806 00:55:11.241430    5434 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0806 00:55:11.255536    5434 command_runner.go:130] > 4a58bc5cb9c3
	I0806 00:55:11.255549    5434 command_runner.go:130] > 47e0c0c6895e
	I0806 00:55:11.255553    5434 command_runner.go:130] > 5fae897eca5b
	I0806 00:55:11.255555    5434 command_runner.go:130] > ea5bc31c5483
	I0806 00:55:11.255559    5434 command_runner.go:130] > ca21c7b20c75
	I0806 00:55:11.255562    5434 command_runner.go:130] > 10a202844745
	I0806 00:55:11.255566    5434 command_runner.go:130] > 6bbb2ed0b308
	I0806 00:55:11.255570    5434 command_runner.go:130] > 731b397a827b
	I0806 00:55:11.255573    5434 command_runner.go:130] > 09c41cba0052
	I0806 00:55:11.255576    5434 command_runner.go:130] > b60a8dd0efa5
	I0806 00:55:11.255580    5434 command_runner.go:130] > 6d93185f30a9
	I0806 00:55:11.255600    5434 command_runner.go:130] > e6892e6b325e
	I0806 00:55:11.255605    5434 command_runner.go:130] > d20d569460ea
	I0806 00:55:11.255608    5434 command_runner.go:130] > 8cca7996d392
	I0806 00:55:11.255611    5434 command_runner.go:130] > bde71375b0e4
	I0806 00:55:11.255614    5434 command_runner.go:130] > 94cf07fa5ddc
	I0806 00:55:11.256218    5434 docker.go:483] Stopping containers: [4a58bc5cb9c3 47e0c0c6895e 5fae897eca5b ea5bc31c5483 ca21c7b20c75 10a202844745 6bbb2ed0b308 731b397a827b 09c41cba0052 b60a8dd0efa5 6d93185f30a9 e6892e6b325e d20d569460ea 8cca7996d392 bde71375b0e4 94cf07fa5ddc]
	I0806 00:55:11.256286    5434 ssh_runner.go:195] Run: docker stop 4a58bc5cb9c3 47e0c0c6895e 5fae897eca5b ea5bc31c5483 ca21c7b20c75 10a202844745 6bbb2ed0b308 731b397a827b 09c41cba0052 b60a8dd0efa5 6d93185f30a9 e6892e6b325e d20d569460ea 8cca7996d392 bde71375b0e4 94cf07fa5ddc
	I0806 00:55:11.268129    5434 command_runner.go:130] > 4a58bc5cb9c3
	I0806 00:55:11.268511    5434 command_runner.go:130] > 47e0c0c6895e
	I0806 00:55:11.268518    5434 command_runner.go:130] > 5fae897eca5b
	I0806 00:55:11.269754    5434 command_runner.go:130] > ea5bc31c5483
	I0806 00:55:11.269760    5434 command_runner.go:130] > ca21c7b20c75
	I0806 00:55:11.269763    5434 command_runner.go:130] > 10a202844745
	I0806 00:55:11.269767    5434 command_runner.go:130] > 6bbb2ed0b308
	I0806 00:55:11.269780    5434 command_runner.go:130] > 731b397a827b
	I0806 00:55:11.269785    5434 command_runner.go:130] > 09c41cba0052
	I0806 00:55:11.269788    5434 command_runner.go:130] > b60a8dd0efa5
	I0806 00:55:11.270315    5434 command_runner.go:130] > 6d93185f30a9
	I0806 00:55:11.270323    5434 command_runner.go:130] > e6892e6b325e
	I0806 00:55:11.270532    5434 command_runner.go:130] > d20d569460ea
	I0806 00:55:11.270538    5434 command_runner.go:130] > 8cca7996d392
	I0806 00:55:11.270541    5434 command_runner.go:130] > bde71375b0e4
	I0806 00:55:11.270544    5434 command_runner.go:130] > 94cf07fa5ddc
	I0806 00:55:11.271328    5434 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0806 00:55:11.284221    5434 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0806 00:55:11.292278    5434 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0806 00:55:11.292297    5434 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0806 00:55:11.292304    5434 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0806 00:55:11.292324    5434 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 00:55:11.292402    5434 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0806 00:55:11.292413    5434 kubeadm.go:157] found existing configuration files:
	
	I0806 00:55:11.292449    5434 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0806 00:55:11.300196    5434 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 00:55:11.300213    5434 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0806 00:55:11.300249    5434 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0806 00:55:11.308035    5434 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0806 00:55:11.315574    5434 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 00:55:11.315591    5434 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0806 00:55:11.315627    5434 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0806 00:55:11.323528    5434 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0806 00:55:11.330930    5434 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 00:55:11.330949    5434 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0806 00:55:11.330983    5434 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0806 00:55:11.338702    5434 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0806 00:55:11.346009    5434 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 00:55:11.346164    5434 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0806 00:55:11.346198    5434 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0806 00:55:11.354219    5434 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0806 00:55:11.362075    5434 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 00:55:11.434757    5434 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0806 00:55:11.434770    5434 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0806 00:55:11.434775    5434 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0806 00:55:11.434780    5434 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0806 00:55:11.434789    5434 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0806 00:55:11.434795    5434 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0806 00:55:11.434800    5434 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0806 00:55:11.434806    5434 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0806 00:55:11.434813    5434 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0806 00:55:11.434823    5434 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0806 00:55:11.434829    5434 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0806 00:55:11.434833    5434 command_runner.go:130] > [certs] Using the existing "sa" key
	I0806 00:55:11.434846    5434 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 00:55:11.472110    5434 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0806 00:55:11.703561    5434 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0806 00:55:11.896147    5434 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0806 00:55:12.067020    5434 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0806 00:55:12.205169    5434 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0806 00:55:12.503640    5434 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0806 00:55:12.505818    5434 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.070936358s)
	I0806 00:55:12.505831    5434 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0806 00:55:12.559506    5434 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 00:55:12.559522    5434 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 00:55:12.559526    5434 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0806 00:55:12.662923    5434 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 00:55:12.717182    5434 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0806 00:55:12.717196    5434 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0806 00:55:12.718956    5434 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0806 00:55:12.719502    5434 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0806 00:55:12.721168    5434 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0806 00:55:12.793262    5434 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0806 00:55:12.801338    5434 api_server.go:52] waiting for apiserver process to appear ...
	I0806 00:55:12.801401    5434 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:55:13.302705    5434 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:55:13.801616    5434 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:55:13.813958    5434 command_runner.go:130] > 1781
	I0806 00:55:13.814003    5434 api_server.go:72] duration metric: took 1.01265181s to wait for apiserver process to appear ...
	I0806 00:55:13.814011    5434 api_server.go:88] waiting for apiserver healthz status ...
	I0806 00:55:13.814027    5434 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0806 00:55:16.347202    5434 api_server.go:279] https://192.169.0.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 00:55:16.347218    5434 api_server.go:103] status: https://192.169.0.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 00:55:16.347226    5434 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0806 00:55:16.392636    5434 api_server.go:279] https://192.169.0.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0806 00:55:16.392652    5434 api_server.go:103] status: https://192.169.0.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0806 00:55:16.814908    5434 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0806 00:55:16.825473    5434 api_server.go:279] https://192.169.0.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 00:55:16.825491    5434 api_server.go:103] status: https://192.169.0.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 00:55:17.314170    5434 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0806 00:55:17.318884    5434 api_server.go:279] https://192.169.0.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0806 00:55:17.318899    5434 api_server.go:103] status: https://192.169.0.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0806 00:55:17.814354    5434 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0806 00:55:17.818288    5434 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0806 00:55:17.818355    5434 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I0806 00:55:17.818361    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:17.818368    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:17.818371    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:17.823335    5434 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0806 00:55:17.823346    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:17.823351    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:17.823354    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:17.823357    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:17.823359    5434 round_trippers.go:580]     Content-Length: 263
	I0806 00:55:17.823362    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:17 GMT
	I0806 00:55:17.823365    5434 round_trippers.go:580]     Audit-Id: 7135051e-b726-47d5-a200-f2d12032ef14
	I0806 00:55:17.823368    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:17.823389    5434 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0806 00:55:17.823431    5434 api_server.go:141] control plane version: v1.30.3
	I0806 00:55:17.823441    5434 api_server.go:131] duration metric: took 4.009346825s to wait for apiserver health ...
	I0806 00:55:17.823448    5434 cni.go:84] Creating CNI manager for ""
	I0806 00:55:17.823451    5434 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0806 00:55:17.844296    5434 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0806 00:55:17.866393    5434 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0806 00:55:17.872058    5434 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0806 00:55:17.872069    5434 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0806 00:55:17.872100    5434 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0806 00:55:17.872129    5434 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0806 00:55:17.872137    5434 command_runner.go:130] > Access: 2024-08-06 07:55:03.988856323 +0000
	I0806 00:55:17.872142    5434 command_runner.go:130] > Modify: 2024-07-29 16:10:03.000000000 +0000
	I0806 00:55:17.872146    5434 command_runner.go:130] > Change: 2024-08-06 07:55:01.454930767 +0000
	I0806 00:55:17.872149    5434 command_runner.go:130] >  Birth: -
	I0806 00:55:17.872222    5434 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0806 00:55:17.872231    5434 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0806 00:55:17.887537    5434 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0806 00:55:18.233164    5434 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0806 00:55:18.245992    5434 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0806 00:55:18.309665    5434 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0806 00:55:18.352694    5434 command_runner.go:130] > daemonset.apps/kindnet configured
	I0806 00:55:18.354227    5434 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 00:55:18.354308    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:55:18.354315    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:18.354322    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:18.354326    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:18.356655    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:18.356668    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:18.356676    5434 round_trippers.go:580]     Audit-Id: 2991c079-ff2b-41b9-b1df-dd8b701947e3
	I0806 00:55:18.356682    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:18.356688    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:18.356692    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:18.356696    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:18.356701    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:18 GMT
	I0806 00:55:18.357370    5434 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1423"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"1411","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 73056 chars]
	I0806 00:55:18.361239    5434 system_pods.go:59] 10 kube-system pods found
	I0806 00:55:18.361259    5434 system_pods.go:61] "coredns-7db6d8ff4d-snf8h" [80bd44de-6f91-4e47-8832-a66b3c64808d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0806 00:55:18.361264    5434 system_pods.go:61] "etcd-multinode-100000" [227ab7d9-399e-4151-bee7-1520182e38fe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0806 00:55:18.361269    5434 system_pods.go:61] "kindnet-dn72w" [34a2c1f4-38da-4e95-8d44-d2eae75e5dcb] Running
	I0806 00:55:18.361285    5434 system_pods.go:61] "kindnet-g2xk7" [84207ead-3403-4759-9bf2-ae0aa742699e] Running
	I0806 00:55:18.361295    5434 system_pods.go:61] "kube-apiserver-multinode-100000" [ce1dee9b-5f30-49a9-9066-7faf5f65c4d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0806 00:55:18.361301    5434 system_pods.go:61] "kube-controller-manager-multinode-100000" [cefe88fb-c337-47c3-b4f2-acdadde539f2] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0806 00:55:18.361307    5434 system_pods.go:61] "kube-proxy-crsrr" [f72beca3-9601-4aad-b3ba-33f8de5db052] Running
	I0806 00:55:18.361310    5434 system_pods.go:61] "kube-proxy-d9c42" [fe685526-4722-4113-b2b3-9a84182541b7] Running
	I0806 00:55:18.361315    5434 system_pods.go:61] "kube-scheduler-multinode-100000" [773d7bde-86f3-4e9d-b4aa-67ca3b345180] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0806 00:55:18.361318    5434 system_pods.go:61] "storage-provisioner" [38b20fa5-6002-4e12-860f-1aa0047581b1] Running
	I0806 00:55:18.361323    5434 system_pods.go:74] duration metric: took 7.088649ms to wait for pod list to return data ...
	I0806 00:55:18.361331    5434 node_conditions.go:102] verifying NodePressure condition ...
	I0806 00:55:18.361366    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0806 00:55:18.361371    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:18.361377    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:18.361382    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:18.362937    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:18.362946    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:18.362951    5434 round_trippers.go:580]     Audit-Id: f2956865-fa14-407b-9a6f-c187433e5c48
	I0806 00:55:18.362956    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:18.362958    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:18.362961    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:18.362963    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:18.362966    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:18 GMT
	I0806 00:55:18.363144    5434 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1423"},"items":[{"metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10158 chars]
	I0806 00:55:18.363584    5434 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 00:55:18.363598    5434 node_conditions.go:123] node cpu capacity is 2
	I0806 00:55:18.363612    5434 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 00:55:18.363617    5434 node_conditions.go:123] node cpu capacity is 2
	I0806 00:55:18.363620    5434 node_conditions.go:105] duration metric: took 2.285564ms to run NodePressure ...
	I0806 00:55:18.363630    5434 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0806 00:55:18.465445    5434 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0806 00:55:18.619573    5434 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0806 00:55:18.620797    5434 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0806 00:55:18.620897    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0806 00:55:18.620908    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:18.620916    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:18.620933    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:18.622688    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:18.622703    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:18.622711    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:18.622716    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:18 GMT
	I0806 00:55:18.622721    5434 round_trippers.go:580]     Audit-Id: 4c64a921-516a-4271-826d-6e9af481f0ee
	I0806 00:55:18.622725    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:18.622739    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:18.622748    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:18.623132    5434 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1425"},"items":[{"metadata":{"name":"etcd-multinode-100000","namespace":"kube-system","uid":"227ab7d9-399e-4151-bee7-1520182e38fe","resourceVersion":"1410","creationTimestamp":"2024-08-06T07:37:59Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"4d956ffcd8bdef6a75a3174d9c9d792c","kubernetes.io/config.mirror":"4d956ffcd8bdef6a75a3174d9c9d792c","kubernetes.io/config.seen":"2024-08-06T07:37:55.730523562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:37:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 30917 chars]
	I0806 00:55:18.623869    5434 kubeadm.go:739] kubelet initialised
	I0806 00:55:18.623879    5434 kubeadm.go:740] duration metric: took 3.065796ms waiting for restarted kubelet to initialise ...
	I0806 00:55:18.623886    5434 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 00:55:18.623919    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:55:18.623925    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:18.623930    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:18.623934    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:18.625655    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:18.625662    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:18.625667    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:18.625671    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:18.625673    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:18.625675    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:18.625677    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:18 GMT
	I0806 00:55:18.625679    5434 round_trippers.go:580]     Audit-Id: 54fe049e-2496-412e-8bf9-6980782498d1
	I0806 00:55:18.626717    5434 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1425"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"1411","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 73056 chars]
	I0806 00:55:18.628343    5434 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:18.628387    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:55:18.628392    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:18.628409    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:18.628415    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:18.629588    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:18.629607    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:18.629617    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:18.629623    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:18.629630    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:18.629643    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:18 GMT
	I0806 00:55:18.629650    5434 round_trippers.go:580]     Audit-Id: 9ae58c9e-38cc-4d0c-9097-8381a2972b06
	I0806 00:55:18.629653    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:18.629757    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"1411","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0806 00:55:18.630007    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:18.630014    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:18.630020    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:18.630024    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:18.631033    5434 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:55:18.631042    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:18.631049    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:18.631055    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:18.631061    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:18.631067    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:18.631069    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:18 GMT
	I0806 00:55:18.631076    5434 round_trippers.go:580]     Audit-Id: 5da8eb8d-907f-423e-9741-1304c63aac04
	I0806 00:55:18.631208    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:18.631394    5434 pod_ready.go:97] node "multinode-100000" hosting pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-100000" has status "Ready":"False"
	I0806 00:55:18.631404    5434 pod_ready.go:81] duration metric: took 3.05173ms for pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace to be "Ready" ...
	E0806 00:55:18.631410    5434 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-100000" hosting pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-100000" has status "Ready":"False"
	I0806 00:55:18.631417    5434 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:18.631450    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-100000
	I0806 00:55:18.631455    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:18.631460    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:18.631464    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:18.632332    5434 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:55:18.632342    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:18.632356    5434 round_trippers.go:580]     Audit-Id: 54a52952-9d42-450d-8231-0b11106f9607
	I0806 00:55:18.632363    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:18.632367    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:18.632369    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:18.632371    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:18.632376    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:18 GMT
	I0806 00:55:18.632599    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-100000","namespace":"kube-system","uid":"227ab7d9-399e-4151-bee7-1520182e38fe","resourceVersion":"1410","creationTimestamp":"2024-08-06T07:37:59Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"4d956ffcd8bdef6a75a3174d9c9d792c","kubernetes.io/config.mirror":"4d956ffcd8bdef6a75a3174d9c9d792c","kubernetes.io/config.seen":"2024-08-06T07:37:55.730523562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:37:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6582 chars]
	I0806 00:55:18.632795    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:18.632802    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:18.632808    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:18.632812    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:18.633675    5434 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:55:18.633681    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:18.633686    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:18.633689    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:18.633691    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:18.633693    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:18 GMT
	I0806 00:55:18.633696    5434 round_trippers.go:580]     Audit-Id: ce930a97-7ef4-41b0-861f-6ee9e9ecdedc
	I0806 00:55:18.633700    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:18.633807    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:18.633989    5434 pod_ready.go:97] node "multinode-100000" hosting pod "etcd-multinode-100000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-100000" has status "Ready":"False"
	I0806 00:55:18.633997    5434 pod_ready.go:81] duration metric: took 2.576204ms for pod "etcd-multinode-100000" in "kube-system" namespace to be "Ready" ...
	E0806 00:55:18.634003    5434 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-100000" hosting pod "etcd-multinode-100000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-100000" has status "Ready":"False"
	I0806 00:55:18.634013    5434 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:18.634047    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-100000
	I0806 00:55:18.634051    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:18.634056    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:18.634060    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:18.635009    5434 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:55:18.635016    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:18.635021    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:18.635029    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:18.635032    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:18.635035    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:18.635039    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:18 GMT
	I0806 00:55:18.635042    5434 round_trippers.go:580]     Audit-Id: e825c9c7-f114-4531-be9a-248fd14f9459
	I0806 00:55:18.635226    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-100000","namespace":"kube-system","uid":"ce1dee9b-5f30-49a9-9066-7faf5f65c4d3","resourceVersion":"1414","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"7812fbdfd4f741d8b504bcb30d9268c5","kubernetes.io/config.mirror":"7812fbdfd4f741d8b504bcb30d9268c5","kubernetes.io/config.seen":"2024-08-06T07:38:00.425843150Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8136 chars]
	I0806 00:55:18.635463    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:18.635470    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:18.635476    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:18.635480    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:18.636292    5434 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:55:18.636298    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:18.636303    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:18.636306    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:18.636309    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:18.636312    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:18 GMT
	I0806 00:55:18.636314    5434 round_trippers.go:580]     Audit-Id: 2be5fb5e-74fa-4c4e-949a-fbca588eb68f
	I0806 00:55:18.636317    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:18.636503    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:18.636669    5434 pod_ready.go:97] node "multinode-100000" hosting pod "kube-apiserver-multinode-100000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-100000" has status "Ready":"False"
	I0806 00:55:18.636680    5434 pod_ready.go:81] duration metric: took 2.660039ms for pod "kube-apiserver-multinode-100000" in "kube-system" namespace to be "Ready" ...
	E0806 00:55:18.636687    5434 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-100000" hosting pod "kube-apiserver-multinode-100000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-100000" has status "Ready":"False"
	I0806 00:55:18.636692    5434 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:18.636726    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-100000
	I0806 00:55:18.636731    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:18.636737    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:18.636741    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:18.637642    5434 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:55:18.637648    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:18.637652    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:18.637655    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:18.637658    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:18 GMT
	I0806 00:55:18.637660    5434 round_trippers.go:580]     Audit-Id: edd31c10-32fe-4cc6-a258-36887d0ea7c0
	I0806 00:55:18.637662    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:18.637665    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:18.637798    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-100000","namespace":"kube-system","uid":"cefe88fb-c337-47c3-b4f2-acdadde539f2","resourceVersion":"1415","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0ae29164078dfb7d8ac7d5a935c4d875","kubernetes.io/config.mirror":"0ae29164078dfb7d8ac7d5a935c4d875","kubernetes.io/config.seen":"2024-08-06T07:38:00.425770816Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7727 chars]
	I0806 00:55:18.755614    5434 request.go:629] Waited for 117.467135ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:18.755664    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:18.755674    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:18.755684    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:18.755692    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:18.758404    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:18.758429    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:18.758437    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:18.758440    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:18.758444    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:18.758447    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:18 GMT
	I0806 00:55:18.758450    5434 round_trippers.go:580]     Audit-Id: af6f8e30-1dca-44ac-8578-33584ad0edf5
	I0806 00:55:18.758454    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:18.758542    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:18.758793    5434 pod_ready.go:97] node "multinode-100000" hosting pod "kube-controller-manager-multinode-100000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-100000" has status "Ready":"False"
	I0806 00:55:18.758808    5434 pod_ready.go:81] duration metric: took 122.106804ms for pod "kube-controller-manager-multinode-100000" in "kube-system" namespace to be "Ready" ...
	E0806 00:55:18.758816    5434 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-100000" hosting pod "kube-controller-manager-multinode-100000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-100000" has status "Ready":"False"
	I0806 00:55:18.758821    5434 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-crsrr" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:18.955181    5434 request.go:629] Waited for 196.311166ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crsrr
	I0806 00:55:18.955337    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crsrr
	I0806 00:55:18.955348    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:18.955358    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:18.955366    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:18.958400    5434 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:55:18.958415    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:18.958425    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:18.958430    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:18.958434    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:18.958440    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:19 GMT
	I0806 00:55:18.958446    5434 round_trippers.go:580]     Audit-Id: fdf94cbc-8c9f-4c29-a9b2-d4cd8da861c7
	I0806 00:55:18.958473    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:18.958934    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-crsrr","generateName":"kube-proxy-","namespace":"kube-system","uid":"f72beca3-9601-4aad-b3ba-33f8de5db052","resourceVersion":"1421","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aeb7868a-2175-4480-b58d-3eb9a593c884","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aeb7868a-2175-4480-b58d-3eb9a593c884\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0806 00:55:19.156544    5434 request.go:629] Waited for 197.171014ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:19.156603    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:19.156614    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:19.156625    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:19.156633    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:19.160037    5434 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:55:19.160055    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:19.160062    5434 round_trippers.go:580]     Audit-Id: 82e01bb1-a559-4c93-bc5a-36ad03799626
	I0806 00:55:19.160067    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:19.160072    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:19.160076    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:19.160080    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:19.160083    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:19 GMT
	I0806 00:55:19.160403    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:19.160658    5434 pod_ready.go:97] node "multinode-100000" hosting pod "kube-proxy-crsrr" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-100000" has status "Ready":"False"
	I0806 00:55:19.160675    5434 pod_ready.go:81] duration metric: took 401.836129ms for pod "kube-proxy-crsrr" in "kube-system" namespace to be "Ready" ...
	E0806 00:55:19.160684    5434 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-100000" hosting pod "kube-proxy-crsrr" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-100000" has status "Ready":"False"
	I0806 00:55:19.160691    5434 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-d9c42" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:19.355243    5434 request.go:629] Waited for 194.498904ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d9c42
	I0806 00:55:19.355294    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d9c42
	I0806 00:55:19.355303    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:19.355315    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:19.355391    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:19.358093    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:19.358107    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:19.358114    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:19.358119    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:19 GMT
	I0806 00:55:19.358122    5434 round_trippers.go:580]     Audit-Id: cfff1d7b-c2df-4e8e-900e-e15fd07ebae4
	I0806 00:55:19.358127    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:19.358132    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:19.358135    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:19.358647    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-d9c42","generateName":"kube-proxy-","namespace":"kube-system","uid":"fe685526-4722-4113-b2b3-9a84182541b7","resourceVersion":"1300","creationTimestamp":"2024-08-06T07:52:07Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aeb7868a-2175-4480-b58d-3eb9a593c884","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:52:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aeb7868a-2175-4480-b58d-3eb9a593c884\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5832 chars]
	I0806 00:55:19.554787    5434 request.go:629] Waited for 195.789836ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m03
	I0806 00:55:19.554930    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m03
	I0806 00:55:19.554938    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:19.554953    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:19.554961    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:19.557592    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:19.557606    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:19.557614    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:19.557618    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:19 GMT
	I0806 00:55:19.557622    5434 round_trippers.go:580]     Audit-Id: fc423048-eadc-4e3d-838a-5bb5420a7872
	I0806 00:55:19.557625    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:19.557629    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:19.557633    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:19.557814    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m03","uid":"3008e7de-9d1d-41e0-b794-0ab4c70ffeba","resourceVersion":"1326","creationTimestamp":"2024-08-06T07:53:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_53_13_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:53:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3811 chars]
	I0806 00:55:19.558036    5434 pod_ready.go:92] pod "kube-proxy-d9c42" in "kube-system" namespace has status "Ready":"True"
	I0806 00:55:19.558048    5434 pod_ready.go:81] duration metric: took 397.342388ms for pod "kube-proxy-d9c42" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:19.558056    5434 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:19.756400    5434 request.go:629] Waited for 198.278845ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-100000
	I0806 00:55:19.756502    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-100000
	I0806 00:55:19.756512    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:19.756524    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:19.756530    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:19.759100    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:19.759114    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:19.759120    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:19.759129    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:19.759134    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:19 GMT
	I0806 00:55:19.759138    5434 round_trippers.go:580]     Audit-Id: 756a7ec3-521f-4c8f-b571-4f454c539bae
	I0806 00:55:19.759142    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:19.759145    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:19.759475    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-100000","namespace":"kube-system","uid":"773d7bde-86f3-4e9d-b4aa-67ca3b345180","resourceVersion":"1416","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4d38f57d568be838072abd789adb44b9","kubernetes.io/config.mirror":"4d38f57d568be838072abd789adb44b9","kubernetes.io/config.seen":"2024-08-06T07:38:00.425836810Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5439 chars]
	I0806 00:55:19.954705    5434 request.go:629] Waited for 194.852458ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:19.954757    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:19.954765    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:19.954777    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:19.954784    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:19.957479    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:19.957495    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:19.957502    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:20 GMT
	I0806 00:55:19.957524    5434 round_trippers.go:580]     Audit-Id: b6ccac5a-1a70-456e-887f-92b77e90d08a
	I0806 00:55:19.957533    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:19.957537    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:19.957542    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:19.957546    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:19.957636    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:19.957890    5434 pod_ready.go:97] node "multinode-100000" hosting pod "kube-scheduler-multinode-100000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-100000" has status "Ready":"False"
	I0806 00:55:19.957903    5434 pod_ready.go:81] duration metric: took 399.83274ms for pod "kube-scheduler-multinode-100000" in "kube-system" namespace to be "Ready" ...
	E0806 00:55:19.957911    5434 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-100000" hosting pod "kube-scheduler-multinode-100000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-100000" has status "Ready":"False"
	I0806 00:55:19.957918    5434 pod_ready.go:38] duration metric: took 1.333999093s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 00:55:19.957935    5434 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 00:55:19.968369    5434 command_runner.go:130] > -16
	I0806 00:55:19.968420    5434 ops.go:34] apiserver oom_adj: -16
	I0806 00:55:19.968427    5434 kubeadm.go:597] duration metric: took 8.744836312s to restartPrimaryControlPlane
	I0806 00:55:19.968433    5434 kubeadm.go:394] duration metric: took 8.765947423s to StartCluster
	I0806 00:55:19.968442    5434 settings.go:142] acquiring lock: {Name:mk7aec99dc6d69d6a2c18b35ff8bde3cddf78620 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:55:19.968529    5434 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:55:19.968882    5434 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/kubeconfig: {Name:mka547673b59bc4eb06e1f2c8130de31708dba29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:55:19.969192    5434 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 00:55:19.969203    5434 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 00:55:19.969323    5434 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:55:20.011380    5434 out.go:177] * Verifying Kubernetes components...
	I0806 00:55:20.053491    5434 out.go:177] * Enabled addons: 
	I0806 00:55:20.074236    5434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:55:20.095069    5434 addons.go:510] duration metric: took 125.861598ms for enable addons: enabled=[]
	I0806 00:55:20.212476    5434 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:55:20.224008    5434 node_ready.go:35] waiting up to 6m0s for node "multinode-100000" to be "Ready" ...
	I0806 00:55:20.224067    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:20.224072    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:20.224078    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:20.224080    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:20.225422    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:20.225431    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:20.225436    5434 round_trippers.go:580]     Audit-Id: bdea768a-0771-4f81-aaf0-72fff444e818
	I0806 00:55:20.225441    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:20.225444    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:20.225447    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:20.225449    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:20.225452    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:20 GMT
	I0806 00:55:20.225608    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:20.724322    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:20.724338    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:20.724343    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:20.724346    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:20.725688    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:20.725697    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:20.725702    5434 round_trippers.go:580]     Audit-Id: 47529c6c-2b8f-42fa-bbdc-49f1a87bfa63
	I0806 00:55:20.725705    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:20.725714    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:20.725718    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:20.725723    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:20.725725    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:20 GMT
	I0806 00:55:20.726105    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:21.226129    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:21.226178    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:21.226198    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:21.226204    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:21.228432    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:21.228446    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:21.228456    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:21.228489    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:21.228499    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:21 GMT
	I0806 00:55:21.228503    5434 round_trippers.go:580]     Audit-Id: 940861b1-1bee-4669-a633-b78c51fb0e01
	I0806 00:55:21.228507    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:21.228509    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:21.228685    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:21.724884    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:21.724905    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:21.724917    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:21.724923    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:21.727212    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:21.727229    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:21.727239    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:21.727246    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:21 GMT
	I0806 00:55:21.727270    5434 round_trippers.go:580]     Audit-Id: 39b8c42d-1a15-4b85-a44d-54efa33d7b3c
	I0806 00:55:21.727277    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:21.727281    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:21.727285    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:21.727451    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:22.224395    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:22.224413    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:22.224441    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:22.224446    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:22.226043    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:22.226056    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:22.226062    5434 round_trippers.go:580]     Audit-Id: c8e91ed2-2430-4518-8a7e-297131509505
	I0806 00:55:22.226068    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:22.226072    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:22.226082    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:22.226085    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:22.226096    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:22 GMT
	I0806 00:55:22.226168    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:22.226394    5434 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:55:22.724217    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:22.724231    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:22.724238    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:22.724241    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:22.725884    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:22.725893    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:22.725898    5434 round_trippers.go:580]     Audit-Id: 8984b90e-4f21-48a0-9922-aa383b02e2e4
	I0806 00:55:22.725901    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:22.725904    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:22.725906    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:22.725921    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:22.725928    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:22 GMT
	I0806 00:55:22.726006    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:23.226197    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:23.226213    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:23.226222    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:23.226227    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:23.228077    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:23.228091    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:23.228098    5434 round_trippers.go:580]     Audit-Id: 05da540f-2de9-4c2b-a831-86d6b1a0af0c
	I0806 00:55:23.228104    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:23.228109    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:23.228113    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:23.228116    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:23.228121    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:23 GMT
	I0806 00:55:23.228329    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:23.724616    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:23.724634    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:23.724641    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:23.724647    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:23.726561    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:23.726570    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:23.726575    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:23.726579    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:23.726582    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:23.726585    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:23.726589    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:23 GMT
	I0806 00:55:23.726592    5434 round_trippers.go:580]     Audit-Id: 717421be-9914-41af-87cd-548074beffe0
	I0806 00:55:23.726870    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:24.224496    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:24.224520    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:24.224530    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:24.224536    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:24.227165    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:24.227180    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:24.227187    5434 round_trippers.go:580]     Audit-Id: d59bf722-b6ed-4856-97db-eeadf233cae4
	I0806 00:55:24.227193    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:24.227197    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:24.227203    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:24.227206    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:24.227211    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:24 GMT
	I0806 00:55:24.227590    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:24.227843    5434 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:55:24.724366    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:24.724387    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:24.724399    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:24.724405    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:24.726945    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:24.726958    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:24.726968    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:24.726975    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:24.726980    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:24 GMT
	I0806 00:55:24.726985    5434 round_trippers.go:580]     Audit-Id: 8e42586d-311e-4466-b4e7-937ae9d22140
	I0806 00:55:24.726997    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:24.727002    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:24.727083    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:25.224290    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:25.224302    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:25.224308    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:25.224311    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:25.225919    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:25.225934    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:25.225942    5434 round_trippers.go:580]     Audit-Id: dbeceab5-b466-4bcf-927f-aa8125cf10e4
	I0806 00:55:25.225948    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:25.225954    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:25.225958    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:25.225961    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:25.225964    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:25 GMT
	I0806 00:55:25.226109    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:25.724558    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:25.724580    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:25.724592    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:25.724597    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:25.727186    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:25.727202    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:25.727209    5434 round_trippers.go:580]     Audit-Id: b4b2e1be-2bcf-4130-b3e7-f3cd59e84c3a
	I0806 00:55:25.727215    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:25.727219    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:25.727222    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:25.727226    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:25.727233    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:25 GMT
	I0806 00:55:25.727351    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:26.225182    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:26.225208    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:26.225220    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:26.225228    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:26.227610    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:26.227622    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:26.227630    5434 round_trippers.go:580]     Audit-Id: 2c486001-af75-4c3e-873b-f0aa48805906
	I0806 00:55:26.227634    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:26.227638    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:26.227641    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:26.227647    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:26.227651    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:26 GMT
	I0806 00:55:26.227868    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:26.228123    5434 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:55:26.724703    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:26.724727    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:26.724738    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:26.724744    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:26.726496    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:26.726519    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:26.726533    5434 round_trippers.go:580]     Audit-Id: 1b3f8dbf-e2f2-4f76-bf76-cf65ddb488eb
	I0806 00:55:26.726549    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:26.726560    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:26.726565    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:26.726568    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:26.726589    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:26 GMT
	I0806 00:55:26.726842    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:27.226408    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:27.226444    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:27.226545    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:27.226554    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:27.228994    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:27.229009    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:27.229017    5434 round_trippers.go:580]     Audit-Id: 4a536acc-e002-4c95-a24d-c96441616539
	I0806 00:55:27.229025    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:27.229031    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:27.229038    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:27.229043    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:27.229049    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:27 GMT
	I0806 00:55:27.229272    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:27.725397    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:27.725417    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:27.725424    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:27.725428    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:27.727162    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:27.727172    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:27.727177    5434 round_trippers.go:580]     Audit-Id: efd01aab-f01c-4df6-8fd1-0677a66eabbb
	I0806 00:55:27.727180    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:27.727184    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:27.727187    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:27.727190    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:27.727192    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:27 GMT
	I0806 00:55:27.727269    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:28.225138    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:28.225166    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:28.225178    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:28.225184    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:28.228317    5434 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:55:28.228332    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:28.228339    5434 round_trippers.go:580]     Audit-Id: 1920522c-27e2-428b-b9b0-32dfc742e256
	I0806 00:55:28.228349    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:28.228358    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:28.228364    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:28.228372    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:28.228379    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:28 GMT
	I0806 00:55:28.228805    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:28.229044    5434 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:55:28.725143    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:28.725164    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:28.725175    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:28.725183    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:28.727602    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:28.727614    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:28.727622    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:28.727625    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:28.727629    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:28 GMT
	I0806 00:55:28.727634    5434 round_trippers.go:580]     Audit-Id: a229cf93-a9eb-4122-af98-feb607626cde
	I0806 00:55:28.727640    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:28.727646    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:28.727860    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:29.226062    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:29.226087    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:29.226100    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:29.226107    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:29.228452    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:29.228469    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:29.228479    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:29 GMT
	I0806 00:55:29.228486    5434 round_trippers.go:580]     Audit-Id: e691e116-e3f5-42b3-bb33-372df33e535e
	I0806 00:55:29.228493    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:29.228497    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:29.228502    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:29.228505    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:29.228645    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1409","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0806 00:55:29.724986    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:29.725011    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:29.725022    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:29.725030    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:29.727169    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:29.727183    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:29.727190    5434 round_trippers.go:580]     Audit-Id: b81b78a4-94f1-448e-91f8-4f23fa3af150
	I0806 00:55:29.727195    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:29.727201    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:29.727207    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:29.727210    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:29.727214    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:29 GMT
	I0806 00:55:29.727520    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1525","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0806 00:55:30.225653    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:30.225742    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:30.225756    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:30.225762    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:30.228535    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:30.228548    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:30.228555    5434 round_trippers.go:580]     Audit-Id: bdacd861-bd2b-4cf6-a7d0-225972f6913b
	I0806 00:55:30.228561    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:30.228566    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:30.228571    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:30.228580    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:30.228584    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:30 GMT
	I0806 00:55:30.228987    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1525","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0806 00:55:30.229242    5434 node_ready.go:53] node "multinode-100000" has status "Ready":"False"
	I0806 00:55:30.726557    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:30.726579    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:30.726588    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:30.726595    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:30.729268    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:30.729283    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:30.729290    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:30.729294    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:30.729297    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:30.729300    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:30.729303    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:30 GMT
	I0806 00:55:30.729307    5434 round_trippers.go:580]     Audit-Id: c0bd55a9-f230-44de-82ce-98cc873c9c5b
	I0806 00:55:30.729453    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1534","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0806 00:55:30.729709    5434 node_ready.go:49] node "multinode-100000" has status "Ready":"True"
	I0806 00:55:30.729725    5434 node_ready.go:38] duration metric: took 10.505490137s for node "multinode-100000" to be "Ready" ...
	I0806 00:55:30.729734    5434 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 00:55:30.729775    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:55:30.729784    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:30.729791    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:30.729795    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:30.732583    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:30.732591    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:30.732596    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:30.732614    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:30.732620    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:30 GMT
	I0806 00:55:30.732622    5434 round_trippers.go:580]     Audit-Id: 1220cc10-8f83-4c12-ba78-927ce112b5f3
	I0806 00:55:30.732625    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:30.732627    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:30.734117    5434 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1534"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"1411","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 73675 chars]
	I0806 00:55:30.735649    5434 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:30.735685    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:55:30.735690    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:30.735704    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:30.735708    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:30.736915    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:30.736924    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:30.736929    5434 round_trippers.go:580]     Audit-Id: d3507374-ba87-4836-9b87-4357a5f97dc7
	I0806 00:55:30.736945    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:30.736950    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:30.736952    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:30.736954    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:30.736959    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:30 GMT
	I0806 00:55:30.737082    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"1411","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0806 00:55:30.737316    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:30.737323    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:30.737328    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:30.737332    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:30.738368    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:30.738376    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:30.738381    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:30.738395    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:30.738412    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:30.738420    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:30.738423    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:30 GMT
	I0806 00:55:30.738425    5434 round_trippers.go:580]     Audit-Id: 808a8e75-27d6-42e8-b896-4d2236f6bef9
	I0806 00:55:30.738497    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1534","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0806 00:55:31.236013    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:55:31.236035    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:31.236046    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:31.236054    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:31.238975    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:31.238989    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:31.238996    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:31.239000    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:31 GMT
	I0806 00:55:31.239003    5434 round_trippers.go:580]     Audit-Id: 391ad135-3678-429d-802e-74a7765536c8
	I0806 00:55:31.239006    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:31.239018    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:31.239022    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:31.239147    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"1411","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0806 00:55:31.239529    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:31.239539    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:31.239546    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:31.239556    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:31.240953    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:31.240959    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:31.240963    5434 round_trippers.go:580]     Audit-Id: 19464df1-e3ea-45f2-94dd-4cb9f0465a30
	I0806 00:55:31.240966    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:31.240971    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:31.240976    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:31.240979    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:31.240983    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:31 GMT
	I0806 00:55:31.241167    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1534","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0806 00:55:31.736051    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:55:31.736072    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:31.736084    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:31.736091    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:31.737509    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:31.737522    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:31.737528    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:31.737532    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:31.737535    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:31.737539    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:31 GMT
	I0806 00:55:31.737542    5434 round_trippers.go:580]     Audit-Id: 5099062f-8493-491e-a4a1-1c46865d67f0
	I0806 00:55:31.737544    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:31.737757    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"1411","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0806 00:55:31.738027    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:31.738033    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:31.738039    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:31.738043    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:31.739188    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:31.739201    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:31.739208    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:31.739212    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:31.739214    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:31.739216    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:31.739218    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:31 GMT
	I0806 00:55:31.739221    5434 round_trippers.go:580]     Audit-Id: a820394a-4ac7-4381-ae5a-0ee548fc3466
	I0806 00:55:31.739417    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1534","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0806 00:55:32.236101    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:55:32.236129    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:32.236143    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:32.236152    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:32.238744    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:32.238759    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:32.238767    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:32.238771    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:32 GMT
	I0806 00:55:32.238775    5434 round_trippers.go:580]     Audit-Id: b0ceeb15-7922-46f6-99b5-dac683aa46d7
	I0806 00:55:32.238779    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:32.238782    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:32.238804    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:32.238999    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"1411","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0806 00:55:32.239388    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:32.239400    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:32.239408    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:32.239412    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:32.241071    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:32.241081    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:32.241088    5434 round_trippers.go:580]     Audit-Id: ab6be580-fa10-424a-9c1a-381050be71b5
	I0806 00:55:32.241093    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:32.241098    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:32.241101    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:32.241110    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:32.241114    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:32 GMT
	I0806 00:55:32.241537    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1534","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0806 00:55:32.735995    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:55:32.736007    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:32.736012    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:32.736016    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:32.737431    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:32.737439    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:32.737444    5434 round_trippers.go:580]     Audit-Id: 382f9f72-ab4a-4536-b24d-b2b3e2e59685
	I0806 00:55:32.737447    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:32.737450    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:32.737454    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:32.737456    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:32.737459    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:32 GMT
	I0806 00:55:32.737766    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"1411","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0806 00:55:32.738042    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:32.738049    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:32.738054    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:32.738059    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:32.739904    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:32.739913    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:32.739918    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:32.739924    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:32 GMT
	I0806 00:55:32.739930    5434 round_trippers.go:580]     Audit-Id: 55e12fe6-f961-453a-b23d-5fb17f5439e1
	I0806 00:55:32.739933    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:32.739938    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:32.739940    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:32.740006    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1534","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0806 00:55:32.740180    5434 pod_ready.go:102] pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace has status "Ready":"False"
	I0806 00:55:33.236669    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:55:33.236689    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:33.236698    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:33.236704    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:33.239178    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:33.239191    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:33.239199    5434 round_trippers.go:580]     Audit-Id: 0a4356f5-75a2-4335-a2cc-66b2668d8196
	I0806 00:55:33.239205    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:33.239209    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:33.239213    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:33.239216    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:33.239223    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:33 GMT
	I0806 00:55:33.239427    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"1555","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7013 chars]
	I0806 00:55:33.239784    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:33.239794    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:33.239815    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:33.239819    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:33.240879    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:33.240887    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:33.240894    5434 round_trippers.go:580]     Audit-Id: 2369d202-002f-40d2-aceb-e1666583ff99
	I0806 00:55:33.240900    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:33.240905    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:33.240909    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:33.240926    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:33.240933    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:33 GMT
	I0806 00:55:33.241144    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1534","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0806 00:55:33.736234    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:55:33.736268    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:33.736280    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:33.736286    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:33.738886    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:33.738898    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:33.738905    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:33.738910    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:33 GMT
	I0806 00:55:33.738914    5434 round_trippers.go:580]     Audit-Id: 4f34e4e0-6fab-4bbf-a5a7-b9c6b54a911c
	I0806 00:55:33.738917    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:33.738921    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:33.738924    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:33.739384    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"1555","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 7013 chars]
	I0806 00:55:33.739745    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:33.739754    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:33.739762    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:33.739768    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:33.740922    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:33.740932    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:33.740939    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:33.740958    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:33.740963    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:33.740966    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:33.740968    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:33 GMT
	I0806 00:55:33.740971    5434 round_trippers.go:580]     Audit-Id: dd1dcf4b-68af-4cac-a6ac-01ce652573be
	I0806 00:55:33.741127    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1534","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0806 00:55:34.236513    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:55:34.236536    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:34.236544    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:34.236551    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:34.239043    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:34.239057    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:34.239067    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:34.239073    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:34.239079    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:34 GMT
	I0806 00:55:34.239096    5434 round_trippers.go:580]     Audit-Id: 183d64ec-69e1-479e-9a26-adaaae7a199d
	I0806 00:55:34.239104    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:34.239107    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:34.239237    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"1561","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6784 chars]
	I0806 00:55:34.239594    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:34.239603    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:34.239618    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:34.239625    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:34.241058    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:34.241065    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:34.241089    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:34.241119    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:34.241127    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:34.241135    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:34.241139    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:34 GMT
	I0806 00:55:34.241142    5434 round_trippers.go:580]     Audit-Id: ff2044d7-5235-4b3a-89c1-75ac1ce9e438
	I0806 00:55:34.241223    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1534","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0806 00:55:34.241398    5434 pod_ready.go:92] pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace has status "Ready":"True"
	I0806 00:55:34.241406    5434 pod_ready.go:81] duration metric: took 3.505678748s for pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:34.241413    5434 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:34.241441    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-100000
	I0806 00:55:34.241446    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:34.241451    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:34.241454    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:34.242496    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:34.242506    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:34.242514    5434 round_trippers.go:580]     Audit-Id: fb7bf067-5387-43ef-ad33-3a7388ee70ff
	I0806 00:55:34.242524    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:34.242527    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:34.242530    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:34.242532    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:34.242535    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:34 GMT
	I0806 00:55:34.242649    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-100000","namespace":"kube-system","uid":"227ab7d9-399e-4151-bee7-1520182e38fe","resourceVersion":"1536","creationTimestamp":"2024-08-06T07:37:59Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"4d956ffcd8bdef6a75a3174d9c9d792c","kubernetes.io/config.mirror":"4d956ffcd8bdef6a75a3174d9c9d792c","kubernetes.io/config.seen":"2024-08-06T07:37:55.730523562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:37:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6358 chars]
	I0806 00:55:34.242856    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:34.242863    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:34.242868    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:34.242872    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:34.243888    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:34.243896    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:34.243903    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:34 GMT
	I0806 00:55:34.243906    5434 round_trippers.go:580]     Audit-Id: 71bcf85b-3031-49f2-87ff-bd80d7924d53
	I0806 00:55:34.243909    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:34.243912    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:34.243915    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:34.243918    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:34.244143    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1534","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0806 00:55:34.244309    5434 pod_ready.go:92] pod "etcd-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:55:34.244317    5434 pod_ready.go:81] duration metric: took 2.898686ms for pod "etcd-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:34.244325    5434 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:34.244354    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-100000
	I0806 00:55:34.244359    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:34.244365    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:34.244369    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:34.245259    5434 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:55:34.245266    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:34.245271    5434 round_trippers.go:580]     Audit-Id: 847e525d-9412-44ea-9956-67c962f6c612
	I0806 00:55:34.245274    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:34.245278    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:34.245281    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:34.245286    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:34.245289    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:34 GMT
	I0806 00:55:34.245479    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-100000","namespace":"kube-system","uid":"ce1dee9b-5f30-49a9-9066-7faf5f65c4d3","resourceVersion":"1538","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"7812fbdfd4f741d8b504bcb30d9268c5","kubernetes.io/config.mirror":"7812fbdfd4f741d8b504bcb30d9268c5","kubernetes.io/config.seen":"2024-08-06T07:38:00.425843150Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7892 chars]
	I0806 00:55:34.245715    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:34.245722    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:34.245727    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:34.245731    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:34.246730    5434 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:55:34.246738    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:34.246745    5434 round_trippers.go:580]     Audit-Id: e82f2502-2ac5-4a7b-9ae0-232b9e6b9705
	I0806 00:55:34.246750    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:34.246754    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:34.246758    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:34.246760    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:34.246762    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:34 GMT
	I0806 00:55:34.246958    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1534","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0806 00:55:34.247119    5434 pod_ready.go:92] pod "kube-apiserver-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:55:34.247126    5434 pod_ready.go:81] duration metric: took 2.79564ms for pod "kube-apiserver-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:34.247138    5434 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:34.247163    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-100000
	I0806 00:55:34.247167    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:34.247173    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:34.247177    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:34.248305    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:34.248316    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:34.248323    5434 round_trippers.go:580]     Audit-Id: 4a3a41e2-c462-4a3b-950f-498e978d7010
	I0806 00:55:34.248329    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:34.248332    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:34.248335    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:34.248338    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:34.248341    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:34 GMT
	I0806 00:55:34.248548    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-100000","namespace":"kube-system","uid":"cefe88fb-c337-47c3-b4f2-acdadde539f2","resourceVersion":"1546","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0ae29164078dfb7d8ac7d5a935c4d875","kubernetes.io/config.mirror":"0ae29164078dfb7d8ac7d5a935c4d875","kubernetes.io/config.seen":"2024-08-06T07:38:00.425770816Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7465 chars]
	I0806 00:55:34.248779    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:34.248786    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:34.248792    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:34.248797    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:34.249891    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:34.249899    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:34.249905    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:34.249911    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:34 GMT
	I0806 00:55:34.249916    5434 round_trippers.go:580]     Audit-Id: 428671cd-e6b7-4d7e-b95d-c3318369f09a
	I0806 00:55:34.249925    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:34.249928    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:34.249930    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:34.250080    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1534","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0806 00:55:34.250236    5434 pod_ready.go:92] pod "kube-controller-manager-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:55:34.250243    5434 pod_ready.go:81] duration metric: took 3.098846ms for pod "kube-controller-manager-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:34.250252    5434 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-crsrr" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:34.250275    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crsrr
	I0806 00:55:34.250280    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:34.250285    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:34.250290    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:34.251318    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:34.251326    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:34.251331    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:34.251335    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:34.251339    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:34 GMT
	I0806 00:55:34.251342    5434 round_trippers.go:580]     Audit-Id: f2525451-85fe-4e99-a438-f8fa068013c2
	I0806 00:55:34.251345    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:34.251348    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:34.251508    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-crsrr","generateName":"kube-proxy-","namespace":"kube-system","uid":"f72beca3-9601-4aad-b3ba-33f8de5db052","resourceVersion":"1541","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aeb7868a-2175-4480-b58d-3eb9a593c884","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aeb7868a-2175-4480-b58d-3eb9a593c884\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0806 00:55:34.251727    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:34.251734    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:34.251740    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:34.251743    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:34.252663    5434 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:55:34.252671    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:34.252678    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:34 GMT
	I0806 00:55:34.252680    5434 round_trippers.go:580]     Audit-Id: 86232fe5-5540-4c80-847d-fd7de8db40dd
	I0806 00:55:34.252684    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:34.252688    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:34.252691    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:34.252693    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:34.252835    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1534","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0806 00:55:34.253021    5434 pod_ready.go:92] pod "kube-proxy-crsrr" in "kube-system" namespace has status "Ready":"True"
	I0806 00:55:34.253028    5434 pod_ready.go:81] duration metric: took 2.771874ms for pod "kube-proxy-crsrr" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:34.253034    5434 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d9c42" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:34.437762    5434 request.go:629] Waited for 184.675553ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d9c42
	I0806 00:55:34.437887    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d9c42
	I0806 00:55:34.437901    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:34.437913    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:34.437920    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:34.440660    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:34.440681    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:34.440691    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:34 GMT
	I0806 00:55:34.440726    5434 round_trippers.go:580]     Audit-Id: f785c571-090b-442e-a6a8-eec70e5f8bc9
	I0806 00:55:34.440739    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:34.440745    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:34.440752    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:34.440761    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:34.441141    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-d9c42","generateName":"kube-proxy-","namespace":"kube-system","uid":"fe685526-4722-4113-b2b3-9a84182541b7","resourceVersion":"1300","creationTimestamp":"2024-08-06T07:52:07Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aeb7868a-2175-4480-b58d-3eb9a593c884","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:52:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aeb7868a-2175-4480-b58d-3eb9a593c884\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5832 chars]
	I0806 00:55:34.636584    5434 request.go:629] Waited for 195.11039ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m03
	I0806 00:55:34.636706    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m03
	I0806 00:55:34.636716    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:34.636727    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:34.636737    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:34.638715    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:55:34.638730    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:34.638740    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:34.638750    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:34.638757    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:34 GMT
	I0806 00:55:34.638764    5434 round_trippers.go:580]     Audit-Id: 16712eef-e2e5-4984-9a66-1e7088a908f1
	I0806 00:55:34.638768    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:34.638773    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:34.638994    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m03","uid":"3008e7de-9d1d-41e0-b794-0ab4c70ffeba","resourceVersion":"1326","creationTimestamp":"2024-08-06T07:53:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_53_13_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:53:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3811 chars]
	I0806 00:55:34.639211    5434 pod_ready.go:92] pod "kube-proxy-d9c42" in "kube-system" namespace has status "Ready":"True"
	I0806 00:55:34.639222    5434 pod_ready.go:81] duration metric: took 386.175188ms for pod "kube-proxy-d9c42" in "kube-system" namespace to be "Ready" ...
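The "Waited for … due to client-side throttling, not priority and fairness" lines above come from client-go's client-side rate limiter, which is conceptually a token bucket with a QPS refill rate and a burst allowance; the server-side APF headers (`X-Kubernetes-Pf-*`) are unrelated to these waits. A minimal sketch of that token-bucket behavior with a virtual clock (the class and the qps/burst numbers are illustrative, not client-go's actual defaults or implementation):

```python
class TokenBucket:
    """Token-bucket limiter: `burst` tokens, refilled at `qps` tokens/second."""

    def __init__(self, qps: float, burst: int):
        self.qps = qps
        self.burst = burst
        self.tokens = float(burst)

    def wait_time(self) -> float:
        """Consume one token, returning how long the caller must wait first."""
        if self.tokens >= 1:
            self.tokens -= 1
            return 0.0
        # Not enough tokens: wait until one full token has accrued, then spend it.
        wait = (1 - self.tokens) / self.qps
        self.tokens = 0.0
        return wait

    def refill(self, elapsed: float) -> None:
        """Credit tokens for `elapsed` seconds of wall time, capped at burst."""
        self.tokens = min(self.burst, self.tokens + elapsed * self.qps)


bucket = TokenBucket(qps=5.0, burst=10)
waits = [bucket.wait_time() for _ in range(11)]
print(waits[:10], round(waits[10], 2))  # first 10 calls are free, the 11th waits 0.2s
```

With a low qps and a tight polling loop like the one in this log, each request past the burst pays a fraction-of-a-second wait, which is exactly the ~180–200ms delays reported above.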
	I0806 00:55:34.639231    5434 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:34.837554    5434 request.go:629] Waited for 198.259941ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-100000
	I0806 00:55:34.837695    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-100000
	I0806 00:55:34.837706    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:34.837717    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:34.837722    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:34.840160    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:34.840190    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:34.840201    5434 round_trippers.go:580]     Audit-Id: 8daea4df-6291-470e-98df-86fa130a4477
	I0806 00:55:34.840207    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:34.840213    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:34.840220    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:34.840231    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:34.840236    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:34 GMT
	I0806 00:55:34.840470    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-100000","namespace":"kube-system","uid":"773d7bde-86f3-4e9d-b4aa-67ca3b345180","resourceVersion":"1547","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4d38f57d568be838072abd789adb44b9","kubernetes.io/config.mirror":"4d38f57d568be838072abd789adb44b9","kubernetes.io/config.seen":"2024-08-06T07:38:00.425836810Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5195 chars]
	I0806 00:55:35.036652    5434 request.go:629] Waited for 195.891289ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:35.036755    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:55:35.036767    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:35.036778    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:35.036799    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:35.039118    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:55:35.039132    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:35.039139    5434 round_trippers.go:580]     Audit-Id: bf7d8dbb-663c-4c2c-a231-cee56db0c11c
	I0806 00:55:35.039143    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:35.039145    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:35.039174    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:35.039185    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:35.039190    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:35 GMT
	I0806 00:55:35.039279    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1566","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0806 00:55:35.039527    5434 pod_ready.go:92] pod "kube-scheduler-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:55:35.039538    5434 pod_ready.go:81] duration metric: took 400.291605ms for pod "kube-scheduler-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:55:35.039546    5434 pod_ready.go:38] duration metric: took 4.309719242s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
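The `pod_ready` polling above follows the standard Kubernetes readiness convention: a Pod counts as "Ready" when its `status.conditions` list contains a condition of type `Ready` whose status is the string `"True"`. A minimal sketch of that check over a Pod-shaped dict (the helper name `is_pod_ready` is illustrative, not minikube's actual Go helper):

```python
def is_pod_ready(pod: dict) -> bool:
    """Return True iff the Pod has a condition of type Ready with status "True"."""
    for cond in pod.get("status", {}).get("conditions", []):
        if cond.get("type") == "Ready":
            return cond.get("status") == "True"
    return False


# Example shaped like the control-plane Pods polled in the log above.
pod = {
    "kind": "Pod",
    "metadata": {"name": "kube-scheduler-multinode-100000", "namespace": "kube-system"},
    "status": {
        "conditions": [
            {"type": "PodScheduled", "status": "True"},
            {"type": "Ready", "status": "True"},
        ]
    },
}
print(is_pod_ready(pod))  # prints True
```

Note the status is compared as a string: the API reports condition statuses as `"True"`/`"False"`/`"Unknown"`, not booleans.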
	I0806 00:55:35.039564    5434 api_server.go:52] waiting for apiserver process to appear ...
	I0806 00:55:35.039630    5434 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:55:35.052189    5434 command_runner.go:130] > 1781
	I0806 00:55:35.052291    5434 api_server.go:72] duration metric: took 15.082787345s to wait for apiserver process to appear ...
	I0806 00:55:35.052303    5434 api_server.go:88] waiting for apiserver healthz status ...
	I0806 00:55:35.052313    5434 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0806 00:55:35.055676    5434 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0806 00:55:35.055708    5434 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I0806 00:55:35.055713    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:35.055719    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:35.055723    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:35.056340    5434 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:55:35.056348    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:35.056353    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:35.056358    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:35.056362    5434 round_trippers.go:580]     Content-Length: 263
	I0806 00:55:35.056364    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:35 GMT
	I0806 00:55:35.056367    5434 round_trippers.go:580]     Audit-Id: 73154886-0ddc-48d8-83b9-2382f7a5c2a0
	I0806 00:55:35.056369    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:35.056372    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:35.056380    5434 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0806 00:55:35.056401    5434 api_server.go:141] control plane version: v1.30.3
	I0806 00:55:35.056409    5434 api_server.go:131] duration metric: took 4.101304ms to wait for apiserver health ...
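Once `/healthz` returns 200, minikube issues `GET /version` and reports the control-plane version from the response. The body is plain JSON, so extracting the version is a one-line parse; this sketch uses a body with the same shape as the one logged above:

```python
import json

# A /version response body shaped like the one in the log above.
body = """{
  "major": "1",
  "minor": "30",
  "gitVersion": "v1.30.3",
  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
  "gitTreeState": "clean",
  "buildDate": "2024-07-16T23:48:12Z",
  "goVersion": "go1.22.5",
  "compiler": "gc",
  "platform": "linux/amd64"
}"""

info = json.loads(body)
print(info["gitVersion"])  # prints v1.30.3
```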
	I0806 00:55:35.056414    5434 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 00:55:35.236933    5434 request.go:629] Waited for 180.477708ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:55:35.236987    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:55:35.236995    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:35.237051    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:35.237059    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:35.240656    5434 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:55:35.240666    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:35.240671    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:35.240674    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:35 GMT
	I0806 00:55:35.240694    5434 round_trippers.go:580]     Audit-Id: c968a226-1560-4786-b55a-0b60e1c84edb
	I0806 00:55:35.240716    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:35.240739    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:35.240746    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:35.241438    5434 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1569"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"1561","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 72029 chars]
	I0806 00:55:35.243028    5434 system_pods.go:59] 10 kube-system pods found
	I0806 00:55:35.243040    5434 system_pods.go:61] "coredns-7db6d8ff4d-snf8h" [80bd44de-6f91-4e47-8832-a66b3c64808d] Running
	I0806 00:55:35.243043    5434 system_pods.go:61] "etcd-multinode-100000" [227ab7d9-399e-4151-bee7-1520182e38fe] Running
	I0806 00:55:35.243046    5434 system_pods.go:61] "kindnet-dn72w" [34a2c1f4-38da-4e95-8d44-d2eae75e5dcb] Running
	I0806 00:55:35.243049    5434 system_pods.go:61] "kindnet-g2xk7" [84207ead-3403-4759-9bf2-ae0aa742699e] Running
	I0806 00:55:35.243052    5434 system_pods.go:61] "kube-apiserver-multinode-100000" [ce1dee9b-5f30-49a9-9066-7faf5f65c4d3] Running
	I0806 00:55:35.243054    5434 system_pods.go:61] "kube-controller-manager-multinode-100000" [cefe88fb-c337-47c3-b4f2-acdadde539f2] Running
	I0806 00:55:35.243057    5434 system_pods.go:61] "kube-proxy-crsrr" [f72beca3-9601-4aad-b3ba-33f8de5db052] Running
	I0806 00:55:35.243060    5434 system_pods.go:61] "kube-proxy-d9c42" [fe685526-4722-4113-b2b3-9a84182541b7] Running
	I0806 00:55:35.243062    5434 system_pods.go:61] "kube-scheduler-multinode-100000" [773d7bde-86f3-4e9d-b4aa-67ca3b345180] Running
	I0806 00:55:35.243065    5434 system_pods.go:61] "storage-provisioner" [38b20fa5-6002-4e12-860f-1aa0047581b1] Running
	I0806 00:55:35.243069    5434 system_pods.go:74] duration metric: took 186.64791ms to wait for pod list to return data ...
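The "10 kube-system pods found" summary above is produced by listing the namespace's pods once and printing each pod's name, UID, and phase. A sketch of that summarization over a PodList-shaped dict (field names follow the API objects shown in the log; the helper itself is illustrative, not minikube's `system_pods.go` code):

```python
def summarize_pods(pod_list: dict) -> list[str]:
    """Render one 'name [uid] Phase' line per pod, like the log output above."""
    lines = []
    for item in pod_list.get("items", []):
        meta = item["metadata"]
        phase = item.get("status", {}).get("phase", "Unknown")
        lines.append(f'"{meta["name"]}" [{meta["uid"]}] {phase}')
    return lines


# Two items shaped like entries from the PodList response in the log above.
pod_list = {
    "kind": "PodList",
    "items": [
        {"metadata": {"name": "coredns-7db6d8ff4d-snf8h",
                      "uid": "80bd44de-6f91-4e47-8832-a66b3c64808d"},
         "status": {"phase": "Running"}},
        {"metadata": {"name": "etcd-multinode-100000",
                      "uid": "227ab7d9-399e-4151-bee7-1520182e38fe"},
         "status": {"phase": "Running"}},
    ],
}
for line in summarize_pods(pod_list):
    print(line)
```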
	I0806 00:55:35.243074    5434 default_sa.go:34] waiting for default service account to be created ...
	I0806 00:55:35.437141    5434 request.go:629] Waited for 193.980924ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0806 00:55:35.437265    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0806 00:55:35.437276    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:35.437286    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:35.437295    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:35.440447    5434 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:55:35.440462    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:35.440469    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:35.440473    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:35.440477    5434 round_trippers.go:580]     Content-Length: 262
	I0806 00:55:35.440481    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:35 GMT
	I0806 00:55:35.440487    5434 round_trippers.go:580]     Audit-Id: e1253ccf-74ad-4370-b01a-74c5f89d2d5b
	I0806 00:55:35.440491    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:35.440493    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:35.440507    5434 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1569"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b920a0f4-26ad-4389-bfd3-1a9764da9619","resourceVersion":"336","creationTimestamp":"2024-08-06T07:38:14Z"}}]}
	I0806 00:55:35.440658    5434 default_sa.go:45] found service account: "default"
	I0806 00:55:35.440671    5434 default_sa.go:55] duration metric: took 197.58859ms for default service account to be created ...
	I0806 00:55:35.440682    5434 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 00:55:35.637354    5434 request.go:629] Waited for 196.567541ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:55:35.637426    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:55:35.637435    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:35.637469    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:35.637484    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:35.640994    5434 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:55:35.641005    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:35.641010    5434 round_trippers.go:580]     Audit-Id: 75dc9c49-a29e-4d79-9d8b-13c10799dcee
	I0806 00:55:35.641014    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:35.641017    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:35.641020    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:35.641023    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:35.641027    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:35 GMT
	I0806 00:55:35.641703    5434 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1569"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"1561","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 72029 chars]
	I0806 00:55:35.643246    5434 system_pods.go:86] 10 kube-system pods found
	I0806 00:55:35.643257    5434 system_pods.go:89] "coredns-7db6d8ff4d-snf8h" [80bd44de-6f91-4e47-8832-a66b3c64808d] Running
	I0806 00:55:35.643262    5434 system_pods.go:89] "etcd-multinode-100000" [227ab7d9-399e-4151-bee7-1520182e38fe] Running
	I0806 00:55:35.643267    5434 system_pods.go:89] "kindnet-dn72w" [34a2c1f4-38da-4e95-8d44-d2eae75e5dcb] Running
	I0806 00:55:35.643271    5434 system_pods.go:89] "kindnet-g2xk7" [84207ead-3403-4759-9bf2-ae0aa742699e] Running
	I0806 00:55:35.643275    5434 system_pods.go:89] "kube-apiserver-multinode-100000" [ce1dee9b-5f30-49a9-9066-7faf5f65c4d3] Running
	I0806 00:55:35.643279    5434 system_pods.go:89] "kube-controller-manager-multinode-100000" [cefe88fb-c337-47c3-b4f2-acdadde539f2] Running
	I0806 00:55:35.643283    5434 system_pods.go:89] "kube-proxy-crsrr" [f72beca3-9601-4aad-b3ba-33f8de5db052] Running
	I0806 00:55:35.643286    5434 system_pods.go:89] "kube-proxy-d9c42" [fe685526-4722-4113-b2b3-9a84182541b7] Running
	I0806 00:55:35.643297    5434 system_pods.go:89] "kube-scheduler-multinode-100000" [773d7bde-86f3-4e9d-b4aa-67ca3b345180] Running
	I0806 00:55:35.643300    5434 system_pods.go:89] "storage-provisioner" [38b20fa5-6002-4e12-860f-1aa0047581b1] Running
	I0806 00:55:35.643306    5434 system_pods.go:126] duration metric: took 202.613344ms to wait for k8s-apps to be running ...
	I0806 00:55:35.643314    5434 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 00:55:35.643362    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:55:35.655159    5434 system_svc.go:56] duration metric: took 11.839973ms WaitForService to wait for kubelet
	I0806 00:55:35.655174    5434 kubeadm.go:582] duration metric: took 15.685657412s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 00:55:35.655187    5434 node_conditions.go:102] verifying NodePressure condition ...
	I0806 00:55:35.837513    5434 request.go:629] Waited for 182.238504ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes
	I0806 00:55:35.837562    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0806 00:55:35.837575    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:35.837674    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:35.837681    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:35.840771    5434 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0806 00:55:35.840788    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:35.840797    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:35.840801    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:35.840805    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:35.840809    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:35 GMT
	I0806 00:55:35.840813    5434 round_trippers.go:580]     Audit-Id: d5fa65d8-9d75-48e1-a19e-fc8717ce8edd
	I0806 00:55:35.840818    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:35.840995    5434 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1569"},"items":[{"metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1566","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10031 chars]
	I0806 00:55:35.841382    5434 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 00:55:35.841395    5434 node_conditions.go:123] node cpu capacity is 2
	I0806 00:55:35.841404    5434 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 00:55:35.841409    5434 node_conditions.go:123] node cpu capacity is 2
	I0806 00:55:35.841418    5434 node_conditions.go:105] duration metric: took 186.219515ms to run NodePressure ...
	I0806 00:55:35.841429    5434 start.go:241] waiting for startup goroutines ...
	I0806 00:55:35.841437    5434 start.go:246] waiting for cluster config update ...
	I0806 00:55:35.841445    5434 start.go:255] writing updated cluster config ...
	I0806 00:55:35.863202    5434 out.go:177] 
	I0806 00:55:35.883985    5434 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:55:35.884076    5434 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:55:35.905924    5434 out.go:177] * Starting "multinode-100000-m02" worker node in "multinode-100000" cluster
	I0806 00:55:35.947857    5434 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:55:35.947891    5434 cache.go:56] Caching tarball of preloaded images
	I0806 00:55:35.948065    5434 preload.go:172] Found /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0806 00:55:35.948085    5434 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 00:55:35.948216    5434 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:55:35.949041    5434 start.go:360] acquireMachinesLock for multinode-100000-m02: {Name:mk23fe223591838ba69a1052c4474834b6e8897d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:55:35.949141    5434 start.go:364] duration metric: took 76.368µs to acquireMachinesLock for "multinode-100000-m02"
	I0806 00:55:35.949168    5434 start.go:96] Skipping create...Using existing machine configuration
	I0806 00:55:35.949175    5434 fix.go:54] fixHost starting: m02
	I0806 00:55:35.949547    5434 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:55:35.949564    5434 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:55:35.958609    5434 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53096
	I0806 00:55:35.958994    5434 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:55:35.959363    5434 main.go:141] libmachine: Using API Version  1
	I0806 00:55:35.959380    5434 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:55:35.959624    5434 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:55:35.959754    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:55:35.959842    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetState
	I0806 00:55:35.959924    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:55:35.959995    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 4427
	I0806 00:55:35.960908    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid 4427 missing from process table
	I0806 00:55:35.960925    5434 fix.go:112] recreateIfNeeded on multinode-100000-m02: state=Stopped err=<nil>
	I0806 00:55:35.960936    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	W0806 00:55:35.961012    5434 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 00:55:35.981954    5434 out.go:177] * Restarting existing hyperkit VM for "multinode-100000-m02" ...
	I0806 00:55:36.023997    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .Start
	I0806 00:55:36.024279    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:55:36.024350    5434 main.go:141] libmachine: (multinode-100000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/hyperkit.pid
	I0806 00:55:36.026126    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid 4427 missing from process table
	I0806 00:55:36.026148    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | pid 4427 is in state "Stopped"
	I0806 00:55:36.026165    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/hyperkit.pid...
	I0806 00:55:36.026384    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | Using UUID 11e38ce6-805a-4a8b-9cb1-968ee3a613d4
	I0806 00:55:36.053863    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | Generated MAC ee:b:b7:3a:75:5c
	I0806 00:55:36.053890    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000
	I0806 00:55:36.054036    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"11e38ce6-805a-4a8b-9cb1-968ee3a613d4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bc9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 00:55:36.054065    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"11e38ce6-805a-4a8b-9cb1-968ee3a613d4", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bc9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 00:55:36.054112    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "11e38ce6-805a-4a8b-9cb1-968ee3a613d4", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/multinode-100000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"}
	I0806 00:55:36.054150    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 11e38ce6-805a-4a8b-9cb1-968ee3a613d4 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/multinode-100000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"
	I0806 00:55:36.054170    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0806 00:55:36.055617    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 DEBUG: hyperkit: Pid is 5480
	I0806 00:55:36.056013    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | Attempt 0
	I0806 00:55:36.056032    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:55:36.056086    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 5480
	I0806 00:55:36.058061    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | Searching for ee:b:b7:3a:75:5c in /var/db/dhcpd_leases ...
	I0806 00:55:36.058156    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | Found 14 entries in /var/db/dhcpd_leases!
	I0806 00:55:36.058180    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b32856}
	I0806 00:55:36.058195    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b327da}
	I0806 00:55:36.058205    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b32483}
	I0806 00:55:36.058212    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | Found match: ee:b:b7:3a:75:5c
	I0806 00:55:36.058221    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | IP: 192.169.0.14
	I0806 00:55:36.058273    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetConfigRaw
	I0806 00:55:36.058939    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:55:36.059162    5434 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:55:36.059607    5434 machine.go:94] provisionDockerMachine start ...
	I0806 00:55:36.059631    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:55:36.059771    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:55:36.059905    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:55:36.060011    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:36.060138    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:36.060215    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:55:36.060317    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:55:36.060488    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:55:36.060498    5434 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 00:55:36.063411    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0806 00:55:36.071802    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0806 00:55:36.072735    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:55:36.072761    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:55:36.072772    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:55:36.072788    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:55:36.457976    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0806 00:55:36.457992    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0806 00:55:36.572891    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:55:36.572918    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:55:36.572926    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:55:36.572933    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:55:36.573761    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0806 00:55:36.573770    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:36 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0806 00:55:42.151666    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:42 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0806 00:55:42.151706    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:42 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0806 00:55:42.151714    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:42 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0806 00:55:42.175264    5434 main.go:141] libmachine: (multinode-100000-m02) DBG | 2024/08/06 00:55:42 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0806 00:55:47.123974    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0806 00:55:47.123989    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:55:47.124115    5434 buildroot.go:166] provisioning hostname "multinode-100000-m02"
	I0806 00:55:47.124127    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:55:47.124228    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:55:47.124335    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:55:47.124426    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:47.124515    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:47.124628    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:55:47.124758    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:55:47.124888    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:55:47.124896    5434 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-100000-m02 && echo "multinode-100000-m02" | sudo tee /etc/hostname
	I0806 00:55:47.193924    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-100000-m02
	
	I0806 00:55:47.193947    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:55:47.194084    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:55:47.194175    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:47.194277    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:47.194381    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:55:47.194556    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:55:47.194713    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:55:47.194725    5434 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-100000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-100000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-100000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 00:55:47.260861    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:55:47.260877    5434 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19370-944/.minikube CaCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19370-944/.minikube}
	I0806 00:55:47.260890    5434 buildroot.go:174] setting up certificates
	I0806 00:55:47.260897    5434 provision.go:84] configureAuth start
	I0806 00:55:47.260905    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetMachineName
	I0806 00:55:47.261040    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:55:47.261134    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:55:47.261216    5434 provision.go:143] copyHostCerts
	I0806 00:55:47.261245    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:55:47.261296    5434 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem, removing ...
	I0806 00:55:47.261302    5434 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:55:47.261431    5434 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem (1679 bytes)
	I0806 00:55:47.261631    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:55:47.261668    5434 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem, removing ...
	I0806 00:55:47.261673    5434 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:55:47.261752    5434 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem (1078 bytes)
	I0806 00:55:47.261912    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:55:47.261943    5434 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem, removing ...
	I0806 00:55:47.261948    5434 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:55:47.262015    5434 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem (1123 bytes)
	I0806 00:55:47.262174    5434 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem org=jenkins.multinode-100000-m02 san=[127.0.0.1 192.169.0.14 localhost minikube multinode-100000-m02]
	I0806 00:55:47.800015    5434 provision.go:177] copyRemoteCerts
	I0806 00:55:47.800090    5434 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 00:55:47.800110    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:55:47.800265    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:55:47.800359    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:47.800444    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:55:47.800586    5434 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:55:47.835822    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0806 00:55:47.835891    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0806 00:55:47.855534    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0806 00:55:47.855602    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 00:55:47.875212    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0806 00:55:47.875294    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 00:55:47.894813    5434 provision.go:87] duration metric: took 633.894969ms to configureAuth
	I0806 00:55:47.894825    5434 buildroot.go:189] setting minikube options for container-runtime
	I0806 00:55:47.894996    5434 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:55:47.895010    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:55:47.895165    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:55:47.895256    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:55:47.895340    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:47.895413    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:47.895512    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:55:47.895632    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:55:47.895760    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:55:47.895768    5434 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0806 00:55:47.960699    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0806 00:55:47.960711    5434 buildroot.go:70] root file system type: tmpfs
	I0806 00:55:47.960788    5434 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0806 00:55:47.960800    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:55:47.960931    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:55:47.961018    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:47.961118    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:47.961201    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:55:47.961325    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:55:47.961472    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:55:47.961517    5434 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.13"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0806 00:55:48.030832    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.13
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0806 00:55:48.030856    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:55:48.030994    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:55:48.031096    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:48.031218    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:48.031324    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:55:48.031453    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:55:48.031608    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:55:48.031622    5434 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0806 00:55:49.580688    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0806 00:55:49.580712    5434 machine.go:97] duration metric: took 13.520821195s to provisionDockerMachine
	I0806 00:55:49.580721    5434 start.go:293] postStartSetup for "multinode-100000-m02" (driver="hyperkit")
	I0806 00:55:49.580729    5434 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 00:55:49.580741    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:55:49.580935    5434 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 00:55:49.580949    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:55:49.581045    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:55:49.581137    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:49.581219    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:55:49.581315    5434 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:55:49.618661    5434 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 00:55:49.621544    5434 command_runner.go:130] > NAME=Buildroot
	I0806 00:55:49.621558    5434 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0806 00:55:49.621564    5434 command_runner.go:130] > ID=buildroot
	I0806 00:55:49.621570    5434 command_runner.go:130] > VERSION_ID=2023.02.9
	I0806 00:55:49.621577    5434 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0806 00:55:49.621634    5434 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 00:55:49.621644    5434 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/addons for local assets ...
	I0806 00:55:49.621733    5434 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/files for local assets ...
	I0806 00:55:49.621868    5434 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> 14372.pem in /etc/ssl/certs
	I0806 00:55:49.621877    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> /etc/ssl/certs/14372.pem
	I0806 00:55:49.622032    5434 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 00:55:49.629218    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem --> /etc/ssl/certs/14372.pem (1708 bytes)
	I0806 00:55:49.648584    5434 start.go:296] duration metric: took 67.854005ms for postStartSetup
	I0806 00:55:49.648604    5434 fix.go:56] duration metric: took 13.699160102s for fixHost
	I0806 00:55:49.648620    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:55:49.648751    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:55:49.648847    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:49.648955    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:49.649053    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:55:49.649169    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:55:49.649301    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0806 00:55:49.649308    5434 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 00:55:49.708465    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722930949.775003266
	
	I0806 00:55:49.708477    5434 fix.go:216] guest clock: 1722930949.775003266
	I0806 00:55:49.708483    5434 fix.go:229] Guest: 2024-08-06 00:55:49.775003266 -0700 PDT Remote: 2024-08-06 00:55:49.648611 -0700 PDT m=+56.909349334 (delta=126.392266ms)
	I0806 00:55:49.708493    5434 fix.go:200] guest clock delta is within tolerance: 126.392266ms
	I0806 00:55:49.708497    5434 start.go:83] releasing machines lock for "multinode-100000-m02", held for 13.759075291s
	I0806 00:55:49.708513    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:55:49.708635    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:55:49.732535    5434 out.go:177] * Found network options:
	I0806 00:55:49.751749    5434 out.go:177]   - NO_PROXY=192.169.0.13
	W0806 00:55:49.772913    5434 proxy.go:119] fail to check proxy env: Error ip not in block
	I0806 00:55:49.772952    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:55:49.773817    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:55:49.774060    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:55:49.774180    5434 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 00:55:49.774221    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	W0806 00:55:49.774299    5434 proxy.go:119] fail to check proxy env: Error ip not in block
	I0806 00:55:49.774413    5434 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0806 00:55:49.774425    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:55:49.774433    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:55:49.774632    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:49.774664    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:55:49.774853    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:55:49.774879    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:49.775039    5434 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:55:49.775067    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:55:49.775186    5434 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:55:49.810620    5434 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0806 00:55:49.810895    5434 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 00:55:49.810953    5434 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 00:55:49.857296    5434 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0806 00:55:49.857332    5434 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0806 00:55:49.857355    5434 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 00:55:49.857365    5434 start.go:495] detecting cgroup driver to use...
	I0806 00:55:49.857468    5434 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:55:49.872692    5434 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0806 00:55:49.873028    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0806 00:55:49.882153    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0806 00:55:49.890973    5434 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0806 00:55:49.891028    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0806 00:55:49.899958    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:55:49.908743    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0806 00:55:49.917593    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:55:49.926553    5434 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 00:55:49.935690    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0806 00:55:49.948327    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0806 00:55:49.962759    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0806 00:55:49.973687    5434 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 00:55:49.984291    5434 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0806 00:55:49.984563    5434 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 00:55:49.996230    5434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:55:50.092608    5434 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0806 00:55:50.109699    5434 start.go:495] detecting cgroup driver to use...
	I0806 00:55:50.109769    5434 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0806 00:55:50.121516    5434 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0806 00:55:50.121774    5434 command_runner.go:130] > [Unit]
	I0806 00:55:50.121784    5434 command_runner.go:130] > Description=Docker Application Container Engine
	I0806 00:55:50.121789    5434 command_runner.go:130] > Documentation=https://docs.docker.com
	I0806 00:55:50.121793    5434 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0806 00:55:50.121797    5434 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0806 00:55:50.121802    5434 command_runner.go:130] > StartLimitBurst=3
	I0806 00:55:50.121810    5434 command_runner.go:130] > StartLimitIntervalSec=60
	I0806 00:55:50.121814    5434 command_runner.go:130] > [Service]
	I0806 00:55:50.121817    5434 command_runner.go:130] > Type=notify
	I0806 00:55:50.121820    5434 command_runner.go:130] > Restart=on-failure
	I0806 00:55:50.121824    5434 command_runner.go:130] > Environment=NO_PROXY=192.169.0.13
	I0806 00:55:50.121830    5434 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0806 00:55:50.121835    5434 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0806 00:55:50.121841    5434 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0806 00:55:50.121847    5434 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0806 00:55:50.121852    5434 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0806 00:55:50.121857    5434 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0806 00:55:50.121866    5434 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0806 00:55:50.121876    5434 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0806 00:55:50.121882    5434 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0806 00:55:50.121885    5434 command_runner.go:130] > ExecStart=
	I0806 00:55:50.121901    5434 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0806 00:55:50.121909    5434 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0806 00:55:50.121916    5434 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0806 00:55:50.121921    5434 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0806 00:55:50.121925    5434 command_runner.go:130] > LimitNOFILE=infinity
	I0806 00:55:50.121930    5434 command_runner.go:130] > LimitNPROC=infinity
	I0806 00:55:50.121935    5434 command_runner.go:130] > LimitCORE=infinity
	I0806 00:55:50.121942    5434 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0806 00:55:50.121947    5434 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0806 00:55:50.121951    5434 command_runner.go:130] > TasksMax=infinity
	I0806 00:55:50.121955    5434 command_runner.go:130] > TimeoutStartSec=0
	I0806 00:55:50.121985    5434 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0806 00:55:50.121992    5434 command_runner.go:130] > Delegate=yes
	I0806 00:55:50.121997    5434 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0806 00:55:50.122010    5434 command_runner.go:130] > KillMode=process
	I0806 00:55:50.122016    5434 command_runner.go:130] > [Install]
	I0806 00:55:50.122022    5434 command_runner.go:130] > WantedBy=multi-user.target
	I0806 00:55:50.122096    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:55:50.139137    5434 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 00:55:50.154045    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:55:50.165113    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:55:50.175785    5434 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0806 00:55:50.197105    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:55:50.207733    5434 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:55:50.223158    5434 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0806 00:55:50.223413    5434 ssh_runner.go:195] Run: which cri-dockerd
	I0806 00:55:50.226354    5434 command_runner.go:130] > /usr/bin/cri-dockerd
	I0806 00:55:50.226523    5434 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0806 00:55:50.233762    5434 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0806 00:55:50.247450    5434 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0806 00:55:50.342692    5434 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0806 00:55:50.443763    5434 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0806 00:55:50.443793    5434 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0806 00:55:50.457932    5434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:55:50.549367    5434 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:55:52.862125    5434 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.312693665s)
	I0806 00:55:52.862187    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0806 00:55:52.872409    5434 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0806 00:55:52.885181    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0806 00:55:52.895674    5434 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0806 00:55:52.993698    5434 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0806 00:55:53.084996    5434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:55:53.177209    5434 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0806 00:55:53.191294    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0806 00:55:53.202769    5434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:55:53.315208    5434 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0806 00:55:53.375448    5434 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0806 00:55:53.375521    5434 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0806 00:55:53.379714    5434 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0806 00:55:53.379725    5434 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0806 00:55:53.379729    5434 command_runner.go:130] > Device: 0,22	Inode: 749         Links: 1
	I0806 00:55:53.379738    5434 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0806 00:55:53.379743    5434 command_runner.go:130] > Access: 2024-08-06 07:55:53.393995737 +0000
	I0806 00:55:53.379752    5434 command_runner.go:130] > Modify: 2024-08-06 07:55:53.393995737 +0000
	I0806 00:55:53.379756    5434 command_runner.go:130] > Change: 2024-08-06 07:55:53.395995436 +0000
	I0806 00:55:53.379759    5434 command_runner.go:130] >  Birth: -
	I0806 00:55:53.379848    5434 start.go:563] Will wait 60s for crictl version
	I0806 00:55:53.379892    5434 ssh_runner.go:195] Run: which crictl
	I0806 00:55:53.382613    5434 command_runner.go:130] > /usr/bin/crictl
	I0806 00:55:53.382774    5434 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 00:55:53.409192    5434 command_runner.go:130] > Version:  0.1.0
	I0806 00:55:53.409227    5434 command_runner.go:130] > RuntimeName:  docker
	I0806 00:55:53.409267    5434 command_runner.go:130] > RuntimeVersion:  27.1.1
	I0806 00:55:53.409350    5434 command_runner.go:130] > RuntimeApiVersion:  v1
	I0806 00:55:53.410603    5434 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0806 00:55:53.410671    5434 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0806 00:55:53.426368    5434 command_runner.go:130] > 27.1.1
	I0806 00:55:53.427211    5434 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0806 00:55:53.444242    5434 command_runner.go:130] > 27.1.1
	I0806 00:55:53.466673    5434 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0806 00:55:53.508034    5434 out.go:177]   - env NO_PROXY=192.169.0.13
	I0806 00:55:53.529420    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetIP
	I0806 00:55:53.529824    5434 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0806 00:55:53.534548    5434 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 00:55:53.544263    5434 mustload.go:65] Loading cluster: multinode-100000
	I0806 00:55:53.544442    5434 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:55:53.544650    5434 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:55:53.544664    5434 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:55:53.553344    5434 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53117
	I0806 00:55:53.553689    5434 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:55:53.554007    5434 main.go:141] libmachine: Using API Version  1
	I0806 00:55:53.554017    5434 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:55:53.554209    5434 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:55:53.554331    5434 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:55:53.554416    5434 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:55:53.554495    5434 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 5446
	I0806 00:55:53.555418    5434 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:55:53.555667    5434 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:55:53.555683    5434 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:55:53.564417    5434 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53119
	I0806 00:55:53.564918    5434 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:55:53.565269    5434 main.go:141] libmachine: Using API Version  1
	I0806 00:55:53.565286    5434 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:55:53.565510    5434 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:55:53.565629    5434 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:55:53.565741    5434 certs.go:68] Setting up /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000 for IP: 192.169.0.14
	I0806 00:55:53.565747    5434 certs.go:194] generating shared ca certs ...
	I0806 00:55:53.565760    5434 certs.go:226] acquiring lock for ca certs: {Name:mk58145664d6c2b1eff70ba1600cc91cf1a11355 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:55:53.565915    5434 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.key
	I0806 00:55:53.565968    5434 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.key
	I0806 00:55:53.565978    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0806 00:55:53.566002    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0806 00:55:53.566021    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0806 00:55:53.566039    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0806 00:55:53.566128    5434 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437.pem (1338 bytes)
	W0806 00:55:53.566170    5434 certs.go:480] ignoring /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437_empty.pem, impossibly tiny 0 bytes
	I0806 00:55:53.566180    5434 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem (1675 bytes)
	I0806 00:55:53.566213    5434 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem (1078 bytes)
	I0806 00:55:53.566246    5434 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem (1123 bytes)
	I0806 00:55:53.566280    5434 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem (1679 bytes)
	I0806 00:55:53.566352    5434 certs.go:484] found cert: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem (1708 bytes)
	I0806 00:55:53.566388    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:55:53.566408    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437.pem -> /usr/share/ca-certificates/1437.pem
	I0806 00:55:53.566426    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> /usr/share/ca-certificates/14372.pem
	I0806 00:55:53.566457    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0806 00:55:53.586672    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0806 00:55:53.606199    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0806 00:55:53.625918    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0806 00:55:53.647471    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0806 00:55:53.667119    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/certs/1437.pem --> /usr/share/ca-certificates/1437.pem (1338 bytes)
	I0806 00:55:53.686966    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem --> /usr/share/ca-certificates/14372.pem (1708 bytes)
	I0806 00:55:53.706845    5434 ssh_runner.go:195] Run: openssl version
	I0806 00:55:53.711070    5434 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0806 00:55:53.711296    5434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14372.pem && ln -fs /usr/share/ca-certificates/14372.pem /etc/ssl/certs/14372.pem"
	I0806 00:55:53.719956    5434 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14372.pem
	I0806 00:55:53.723288    5434 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  6 07:14 /usr/share/ca-certificates/14372.pem
	I0806 00:55:53.723381    5434 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  6 07:14 /usr/share/ca-certificates/14372.pem
	I0806 00:55:53.723416    5434 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14372.pem
	I0806 00:55:53.727666    5434 command_runner.go:130] > 3ec20f2e
	I0806 00:55:53.727892    5434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14372.pem /etc/ssl/certs/3ec20f2e.0"
	I0806 00:55:53.736269    5434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0806 00:55:53.744816    5434 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:55:53.748287    5434 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  6 07:05 /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:55:53.748379    5434 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  6 07:05 /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:55:53.748413    5434 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0806 00:55:53.752496    5434 command_runner.go:130] > b5213941
	I0806 00:55:53.752660    5434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0806 00:55:53.761114    5434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1437.pem && ln -fs /usr/share/ca-certificates/1437.pem /etc/ssl/certs/1437.pem"
	I0806 00:55:53.769715    5434 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1437.pem
	I0806 00:55:53.773215    5434 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  6 07:14 /usr/share/ca-certificates/1437.pem
	I0806 00:55:53.773323    5434 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  6 07:14 /usr/share/ca-certificates/1437.pem
	I0806 00:55:53.773361    5434 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1437.pem
	I0806 00:55:53.777430    5434 command_runner.go:130] > 51391683
	I0806 00:55:53.777626    5434 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1437.pem /etc/ssl/certs/51391683.0"
	I0806 00:55:53.786231    5434 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0806 00:55:53.789398    5434 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0806 00:55:53.789477    5434 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0806 00:55:53.789506    5434 kubeadm.go:934] updating node {m02 192.169.0.14 8443 v1.30.3 docker false true} ...
	I0806 00:55:53.789573    5434 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-100000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.14
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0806 00:55:53.789639    5434 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0806 00:55:53.796954    5434 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	I0806 00:55:53.796973    5434 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.3: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.3': No such file or directory
	
	Initiating transfer...
	I0806 00:55:53.797009    5434 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.3
	I0806 00:55:53.804541    5434 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubectl.sha256
	I0806 00:55:53.804541    5434 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubeadm.sha256
	I0806 00:55:53.804555    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/cache/linux/amd64/v1.30.3/kubectl -> /var/lib/minikube/binaries/v1.30.3/kubectl
	I0806 00:55:53.804559    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/cache/linux/amd64/v1.30.3/kubeadm -> /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0806 00:55:53.804541    5434 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/linux/amd64/kubelet.sha256
	I0806 00:55:53.804607    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:55:53.804661    5434 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm
	I0806 00:55:53.804681    5434 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl
	I0806 00:55:53.816499    5434 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0806 00:55:53.816516    5434 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0806 00:55:53.816499    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/cache/linux/amd64/v1.30.3/kubelet -> /var/lib/minikube/binaries/v1.30.3/kubelet
	I0806 00:55:53.816527    5434 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubeadm': No such file or directory
	I0806 00:55:53.816546    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/cache/linux/amd64/v1.30.3/kubeadm --> /var/lib/minikube/binaries/v1.30.3/kubeadm (50249880 bytes)
	I0806 00:55:53.816560    5434 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubectl': No such file or directory
	I0806 00:55:53.816578    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/cache/linux/amd64/v1.30.3/kubectl --> /var/lib/minikube/binaries/v1.30.3/kubectl (51454104 bytes)
	I0806 00:55:53.816648    5434 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet
	I0806 00:55:53.829730    5434 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0806 00:55:53.831208    5434 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.3/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.3/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.3/kubelet': No such file or directory
	I0806 00:55:53.831243    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/cache/linux/amd64/v1.30.3/kubelet --> /var/lib/minikube/binaries/v1.30.3/kubelet (100125080 bytes)
	I0806 00:55:54.414827    5434 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0806 00:55:54.422218    5434 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0806 00:55:54.435907    5434 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0806 00:55:54.449543    5434 ssh_runner.go:195] Run: grep 192.169.0.13	control-plane.minikube.internal$ /etc/hosts
	I0806 00:55:54.452507    5434 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0806 00:55:54.461871    5434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:55:54.556386    5434 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:55:54.571102    5434 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:55:54.571378    5434 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:55:54.571396    5434 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:55:54.580196    5434 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53121
	I0806 00:55:54.580556    5434 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:55:54.580896    5434 main.go:141] libmachine: Using API Version  1
	I0806 00:55:54.580908    5434 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:55:54.581102    5434 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:55:54.581228    5434 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:55:54.581317    5434 start.go:317] joinCluster: &{Name:multinode-100000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.3 ClusterName:multinode-100000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.15 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:f
alse inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:55:54.581413    5434 start.go:330] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0806 00:55:54.581430    5434 host.go:66] Checking if "multinode-100000-m02" exists ...
	I0806 00:55:54.581682    5434 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:55:54.581718    5434 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:55:54.590743    5434 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53123
	I0806 00:55:54.591100    5434 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:55:54.591441    5434 main.go:141] libmachine: Using API Version  1
	I0806 00:55:54.591451    5434 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:55:54.591650    5434 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:55:54.591769    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .DriverName
	I0806 00:55:54.591858    5434 mustload.go:65] Loading cluster: multinode-100000
	I0806 00:55:54.592019    5434 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:55:54.592247    5434 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:55:54.592264    5434 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:55:54.601054    5434 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53125
	I0806 00:55:54.601443    5434 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:55:54.601863    5434 main.go:141] libmachine: Using API Version  1
	I0806 00:55:54.601879    5434 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:55:54.602097    5434 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:55:54.602211    5434 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:55:54.602312    5434 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:55:54.602385    5434 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 5446
	I0806 00:55:54.603346    5434 host.go:66] Checking if "multinode-100000" exists ...
	I0806 00:55:54.603595    5434 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:55:54.603624    5434 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:55:54.612349    5434 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53127
	I0806 00:55:54.612687    5434 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:55:54.613044    5434 main.go:141] libmachine: Using API Version  1
	I0806 00:55:54.613056    5434 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:55:54.613246    5434 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:55:54.613351    5434 main.go:141] libmachine: (multinode-100000) Calling .DriverName
	I0806 00:55:54.613444    5434 api_server.go:166] Checking apiserver status ...
	I0806 00:55:54.613491    5434 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:55:54.613502    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:55:54.613579    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:55:54.613651    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:55:54.613732    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:55:54.613808    5434 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:55:54.655316    5434 command_runner.go:130] > 1781
	I0806 00:55:54.655501    5434 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1781/cgroup
	W0806 00:55:54.662582    5434 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1781/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0806 00:55:54.662643    5434 ssh_runner.go:195] Run: ls
	I0806 00:55:54.665808    5434 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0806 00:55:54.668845    5434 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0806 00:55:54.668896    5434 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl drain multinode-100000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0806 00:55:54.733926    5434 command_runner.go:130] ! Error from server (NotFound): nodes "multinode-100000-m02" not found
	W0806 00:55:54.734037    5434 node.go:126] kubectl drain node "multinode-100000-m02" failed (will continue): sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl drain multinode-100000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: Process exited with status 1
	stdout:
	
	stderr:
	Error from server (NotFound): nodes "multinode-100000-m02" not found
	I0806 00:55:54.734070    5434 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0806 00:55:54.734088    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHHostname
	I0806 00:55:54.734240    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHPort
	I0806 00:55:54.734347    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHKeyPath
	I0806 00:55:54.734435    5434 main.go:141] libmachine: (multinode-100000-m02) Calling .GetSSHUsername
	I0806 00:55:54.734517    5434 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m02/id_rsa Username:docker}
	I0806 00:55:54.797275    5434 command_runner.go:130] ! W0806 07:55:54.866759    1260 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0806 00:55:54.823087    5434 command_runner.go:130] > [preflight] Running pre-flight checks
	I0806 00:55:54.823102    5434 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0806 00:55:54.823107    5434 command_runner.go:130] > [reset] Stopping the kubelet service
	I0806 00:55:54.823111    5434 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0806 00:55:54.823127    5434 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0806 00:55:54.823144    5434 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0806 00:55:54.823151    5434 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0806 00:55:54.823162    5434 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0806 00:55:54.823168    5434 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0806 00:55:54.823174    5434 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0806 00:55:54.823178    5434 command_runner.go:130] > to reset your system's IPVS tables.
	I0806 00:55:54.823184    5434 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0806 00:55:54.823193    5434 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0806 00:55:54.823204    5434 node.go:155] successfully reset node "multinode-100000-m02"
	I0806 00:55:54.823484    5434 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:55:54.823679    5434 kapi.go:59] client config for multinode-100000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key", CAFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1231e1a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 00:55:54.823941    5434 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0806 00:55:54.823978    5434 round_trippers.go:463] DELETE https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:55:54.823982    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:54.823989    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:54.823992    5434 round_trippers.go:473]     Content-Type: application/json
	I0806 00:55:54.823995    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:54.825954    5434 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0806 00:55:54.825963    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:54.825968    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:54.825971    5434 round_trippers.go:580]     Content-Length: 210
	I0806 00:55:54.825974    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:54 GMT
	I0806 00:55:54.825977    5434 round_trippers.go:580]     Audit-Id: f9cc527f-3ff5-4bdd-b5d8-c4395c20aaeb
	I0806 00:55:54.825980    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:54.825983    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:54.825986    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:54.825995    5434 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-100000-m02\" not found","reason":"NotFound","details":{"name":"multinode-100000-m02","kind":"nodes"},"code":404}
	I0806 00:55:54.826112    5434 retry.go:31] will retry after 400.706988ms: nodes "multinode-100000-m02" not found
	I0806 00:55:55.227941    5434 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0806 00:55:55.228047    5434 round_trippers.go:463] DELETE https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:55:55.228058    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:55.228073    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:55.228078    5434 round_trippers.go:473]     Content-Type: application/json
	I0806 00:55:55.228084    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:55.230674    5434 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0806 00:55:55.230689    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:55.230699    5434 round_trippers.go:580]     Audit-Id: 40f81924-8d65-45c8-a203-7d049f0949e2
	I0806 00:55:55.230709    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:55.230714    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:55.230721    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:55.230726    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:55.230731    5434 round_trippers.go:580]     Content-Length: 210
	I0806 00:55:55.230735    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:55 GMT
	I0806 00:55:55.230778    5434 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-100000-m02\" not found","reason":"NotFound","details":{"name":"multinode-100000-m02","kind":"nodes"},"code":404}
	I0806 00:55:55.230842    5434 retry.go:31] will retry after 1.108023885s: nodes "multinode-100000-m02" not found
	I0806 00:55:56.340676    5434 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0806 00:55:56.340736    5434 round_trippers.go:463] DELETE https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:55:56.340746    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:56.340758    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:56.340764    5434 round_trippers.go:473]     Content-Type: application/json
	I0806 00:55:56.340768    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:56.343215    5434 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0806 00:55:56.343230    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:56.343238    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:56.343243    5434 round_trippers.go:580]     Content-Length: 210
	I0806 00:55:56.343246    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:56 GMT
	I0806 00:55:56.343249    5434 round_trippers.go:580]     Audit-Id: 2636c4bd-bdb6-46cd-9b18-05ba8b8e091f
	I0806 00:55:56.343253    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:56.343257    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:56.343260    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:56.343274    5434 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-100000-m02\" not found","reason":"NotFound","details":{"name":"multinode-100000-m02","kind":"nodes"},"code":404}
	I0806 00:55:56.343331    5434 retry.go:31] will retry after 1.598856034s: nodes "multinode-100000-m02" not found
	I0806 00:55:57.943718    5434 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0806 00:55:57.943867    5434 round_trippers.go:463] DELETE https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:55:57.943879    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:57.943891    5434 round_trippers.go:473]     Content-Type: application/json
	I0806 00:55:57.943899    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:57.943909    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:57.946570    5434 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0806 00:55:57.946586    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:57.946594    5434 round_trippers.go:580]     Audit-Id: e8afe910-bb1b-449c-a493-ca9d08761708
	I0806 00:55:57.946598    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:57.946602    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:57.946605    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:57.946608    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:57.946613    5434 round_trippers.go:580]     Content-Length: 210
	I0806 00:55:57.946616    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:58 GMT
	I0806 00:55:57.946629    5434 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-100000-m02\" not found","reason":"NotFound","details":{"name":"multinode-100000-m02","kind":"nodes"},"code":404}
	I0806 00:55:57.946696    5434 retry.go:31] will retry after 1.373802876s: nodes "multinode-100000-m02" not found
	I0806 00:55:59.322365    5434 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0806 00:55:59.322541    5434 round_trippers.go:463] DELETE https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:55:59.322565    5434 round_trippers.go:469] Request Headers:
	I0806 00:55:59.322578    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:55:59.322588    5434 round_trippers.go:473]     Content-Type: application/json
	I0806 00:55:59.322596    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:55:59.324950    5434 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0806 00:55:59.324969    5434 round_trippers.go:577] Response Headers:
	I0806 00:55:59.324985    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:55:59.324989    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:55:59.324993    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:55:59.324997    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:55:59.325001    5434 round_trippers.go:580]     Content-Length: 210
	I0806 00:55:59.325006    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:55:59 GMT
	I0806 00:55:59.325010    5434 round_trippers.go:580]     Audit-Id: e2af268c-a73a-490e-ab45-ea3236b146b1
	I0806 00:55:59.325022    5434 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-100000-m02\" not found","reason":"NotFound","details":{"name":"multinode-100000-m02","kind":"nodes"},"code":404}
	I0806 00:55:59.325079    5434 retry.go:31] will retry after 3.775436146s: nodes "multinode-100000-m02" not found
	I0806 00:56:03.102194    5434 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0806 00:56:03.102283    5434 round_trippers.go:463] DELETE https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:56:03.102292    5434 round_trippers.go:469] Request Headers:
	I0806 00:56:03.102303    5434 round_trippers.go:473]     Content-Type: application/json
	I0806 00:56:03.102311    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:56:03.102317    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:56:03.104958    5434 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0806 00:56:03.104973    5434 round_trippers.go:577] Response Headers:
	I0806 00:56:03.104980    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:56:03.104985    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:56:03.104989    5434 round_trippers.go:580]     Content-Length: 210
	I0806 00:56:03.104993    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:56:03 GMT
	I0806 00:56:03.104998    5434 round_trippers.go:580]     Audit-Id: 98aca243-affb-43c7-9161-6014c5c31359
	I0806 00:56:03.105003    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:56:03.105007    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:56:03.105025    5434 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-100000-m02\" not found","reason":"NotFound","details":{"name":"multinode-100000-m02","kind":"nodes"},"code":404}
	I0806 00:56:03.105086    5434 retry.go:31] will retry after 4.446851201s: nodes "multinode-100000-m02" not found
	I0806 00:56:07.553307    5434 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0806 00:56:07.553452    5434 round_trippers.go:463] DELETE https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:56:07.553463    5434 round_trippers.go:469] Request Headers:
	I0806 00:56:07.553474    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:56:07.553481    5434 round_trippers.go:473]     Content-Type: application/json
	I0806 00:56:07.553487    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:56:07.556261    5434 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0806 00:56:07.556275    5434 round_trippers.go:577] Response Headers:
	I0806 00:56:07.556282    5434 round_trippers.go:580]     Audit-Id: 8cbfa2ad-f1e0-435a-814d-b9df93541a97
	I0806 00:56:07.556305    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:56:07.556314    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:56:07.556320    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:56:07.556325    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:56:07.556328    5434 round_trippers.go:580]     Content-Length: 210
	I0806 00:56:07.556333    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:56:07 GMT
	I0806 00:56:07.556352    5434 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-100000-m02\" not found","reason":"NotFound","details":{"name":"multinode-100000-m02","kind":"nodes"},"code":404}
	I0806 00:56:07.556412    5434 retry.go:31] will retry after 7.516844959s: nodes "multinode-100000-m02" not found
	I0806 00:56:15.073758    5434 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0806 00:56:15.073882    5434 round_trippers.go:463] DELETE https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:56:15.073893    5434 round_trippers.go:469] Request Headers:
	I0806 00:56:15.073902    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:56:15.073918    5434 round_trippers.go:473]     Content-Type: application/json
	I0806 00:56:15.073927    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:56:15.076399    5434 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0806 00:56:15.076414    5434 round_trippers.go:577] Response Headers:
	I0806 00:56:15.076421    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:56:15.076425    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:56:15.076429    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:56:15.076433    5434 round_trippers.go:580]     Content-Length: 210
	I0806 00:56:15.076437    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:56:15 GMT
	I0806 00:56:15.076441    5434 round_trippers.go:580]     Audit-Id: eccd1fc1-72fa-4e6e-b254-5b88385411f9
	I0806 00:56:15.076446    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:56:15.076458    5434 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-100000-m02\" not found","reason":"NotFound","details":{"name":"multinode-100000-m02","kind":"nodes"},"code":404}
	I0806 00:56:15.076536    5434 retry.go:31] will retry after 10.77059598s: nodes "multinode-100000-m02" not found
	I0806 00:56:25.849418    5434 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0806 00:56:25.849492    5434 round_trippers.go:463] DELETE https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:56:25.849503    5434 round_trippers.go:469] Request Headers:
	I0806 00:56:25.849515    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:56:25.849531    5434 round_trippers.go:473]     Content-Type: application/json
	I0806 00:56:25.849537    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:56:25.852375    5434 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0806 00:56:25.852390    5434 round_trippers.go:577] Response Headers:
	I0806 00:56:25.852398    5434 round_trippers.go:580]     Audit-Id: dfaf584b-983a-4352-8ddc-170ef007830f
	I0806 00:56:25.852403    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:56:25.852407    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:56:25.852411    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:56:25.852415    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:56:25.852420    5434 round_trippers.go:580]     Content-Length: 210
	I0806 00:56:25.852424    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:56:25 GMT
	I0806 00:56:25.852442    5434 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-100000-m02\" not found","reason":"NotFound","details":{"name":"multinode-100000-m02","kind":"nodes"},"code":404}
	I0806 00:56:25.852512    5434 retry.go:31] will retry after 10.459387207s: nodes "multinode-100000-m02" not found
	I0806 00:56:36.312777    5434 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0806 00:56:36.312919    5434 round_trippers.go:463] DELETE https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:56:36.312928    5434 round_trippers.go:469] Request Headers:
	I0806 00:56:36.312938    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:56:36.312946    5434 round_trippers.go:473]     Content-Type: application/json
	I0806 00:56:36.312950    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:56:36.315607    5434 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0806 00:56:36.315631    5434 round_trippers.go:577] Response Headers:
	I0806 00:56:36.315640    5434 round_trippers.go:580]     Audit-Id: cdb9cd6a-848e-43df-878d-ce6bd2124463
	I0806 00:56:36.315644    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:56:36.315648    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:56:36.315653    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:56:36.315660    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:56:36.315665    5434 round_trippers.go:580]     Content-Length: 210
	I0806 00:56:36.315669    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:56:36 GMT
	I0806 00:56:36.315686    5434 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-100000-m02\" not found","reason":"NotFound","details":{"name":"multinode-100000-m02","kind":"nodes"},"code":404}
	I0806 00:56:36.315747    5434 retry.go:31] will retry after 23.324068664s: nodes "multinode-100000-m02" not found
	I0806 00:56:59.641144    5434 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0806 00:56:59.641201    5434 round_trippers.go:463] DELETE https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:56:59.641225    5434 round_trippers.go:469] Request Headers:
	I0806 00:56:59.641239    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:56:59.641249    5434 round_trippers.go:473]     Content-Type: application/json
	I0806 00:56:59.641257    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:56:59.643697    5434 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0806 00:56:59.643714    5434 round_trippers.go:577] Response Headers:
	I0806 00:56:59.643720    5434 round_trippers.go:580]     Audit-Id: 38b7f694-4a26-4adb-ab3a-228ffe36e476
	I0806 00:56:59.643724    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:56:59.643727    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:56:59.643731    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:56:59.643735    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:56:59.643738    5434 round_trippers.go:580]     Content-Length: 210
	I0806 00:56:59.643741    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:56:59 GMT
	I0806 00:56:59.643754    5434 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-100000-m02\" not found","reason":"NotFound","details":{"name":"multinode-100000-m02","kind":"nodes"},"code":404}
	I0806 00:56:59.643814    5434 retry.go:31] will retry after 37.697414419s: nodes "multinode-100000-m02" not found
	I0806 00:57:37.342702    5434 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0806 00:57:37.342800    5434 round_trippers.go:463] DELETE https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:37.342810    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:37.342821    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:37.342830    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:37.342835    5434 round_trippers.go:473]     Content-Type: application/json
	I0806 00:57:37.345526    5434 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0806 00:57:37.345538    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:37.345545    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:37.345550    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:37.345555    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:37.345566    5434 round_trippers.go:580]     Content-Length: 210
	I0806 00:57:37.345570    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:37 GMT
	I0806 00:57:37.345573    5434 round_trippers.go:580]     Audit-Id: b9d864fc-012a-495b-95ff-adf144d59a54
	I0806 00:57:37.345578    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:37.345617    5434 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-100000-m02\" not found","reason":"NotFound","details":{"name":"multinode-100000-m02","kind":"nodes"},"code":404}
	E0806 00:57:37.345675    5434 node.go:177] kubectl delete node "multinode-100000-m02" failed: nodes "multinode-100000-m02" not found
	E0806 00:57:37.345697    5434 start.go:332] error removing existing worker node "m02" before rejoining cluster, will continue anyway: nodes "multinode-100000-m02" not found
	I0806 00:57:37.345704    5434 start.go:334] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0806 00:57:37.345721    5434 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0806 00:57:37.345736    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHHostname
	I0806 00:57:37.345908    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHPort
	I0806 00:57:37.346039    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHKeyPath
	I0806 00:57:37.346180    5434 main.go:141] libmachine: (multinode-100000) Calling .GetSSHUsername
	I0806 00:57:37.346287    5434 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000/id_rsa Username:docker}
	I0806 00:57:37.440608    5434 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 7th74k.mbppog0s62qzrc0x --discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e 
	I0806 00:57:37.440643    5434 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0806 00:57:37.440664    5434 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 7th74k.mbppog0s62qzrc0x --discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-100000-m02"
	I0806 00:57:37.471583    5434 command_runner.go:130] > [preflight] Running pre-flight checks
	I0806 00:57:37.570516    5434 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0806 00:57:37.570539    5434 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0806 00:57:37.602396    5434 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0806 00:57:37.602415    5434 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0806 00:57:37.602420    5434 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0806 00:57:37.711821    5434 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0806 00:57:38.219224    5434 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 507.362685ms
	I0806 00:57:38.219246    5434 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0806 00:57:38.228999    5434 command_runner.go:130] > This node has joined the cluster:
	I0806 00:57:38.229014    5434 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0806 00:57:38.229019    5434 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0806 00:57:38.229024    5434 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0806 00:57:38.230556    5434 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 00:57:38.230752    5434 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0806 00:57:38.455515    5434 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0806 00:57:38.455597    5434 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-100000-m02 minikube.k8s.io/updated_at=2024_08_06T00_57_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8 minikube.k8s.io/name=multinode-100000 minikube.k8s.io/primary=false
	I0806 00:57:38.532968    5434 command_runner.go:130] > node/multinode-100000-m02 labeled
	I0806 00:57:38.534104    5434 start.go:319] duration metric: took 1m43.950741215s to joinCluster
	I0806 00:57:38.534142    5434 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0806 00:57:38.534348    5434 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:57:38.556945    5434 out.go:177] * Verifying Kubernetes components...
	I0806 00:57:38.616434    5434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:57:38.711085    5434 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 00:57:38.723307    5434 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:57:38.723529    5434 kapi.go:59] client config for multinode-100000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/client.key", CAFile:"/Users/jenkins/minikube-integration/19370-944/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1231e1a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0806 00:57:38.723727    5434 node_ready.go:35] waiting up to 6m0s for node "multinode-100000-m02" to be "Ready" ...
	I0806 00:57:38.723769    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:38.723773    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:38.723779    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:38.723783    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:38.725173    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:38.725181    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:38.725191    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:38.725197    5434 round_trippers.go:580]     Content-Length: 3920
	I0806 00:57:38.725201    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:38 GMT
	I0806 00:57:38.725205    5434 round_trippers.go:580]     Audit-Id: 3bab3b05-f565-4c20-9491-957d949d06b6
	I0806 00:57:38.725209    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:38.725215    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:38.725219    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:38.725301    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1698","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 2896 chars]
	I0806 00:57:39.223944    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:39.223970    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:39.223980    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:39.224064    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:39.226799    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:39.226812    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:39.226819    5434 round_trippers.go:580]     Audit-Id: c3e00223-64dc-4c31-ae51-85dc85e235a3
	I0806 00:57:39.226822    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:39.226826    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:39.226830    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:39.226833    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:39.226837    5434 round_trippers.go:580]     Content-Length: 3920
	I0806 00:57:39.226842    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:39 GMT
	I0806 00:57:39.226909    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1698","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 2896 chars]
	I0806 00:57:39.724720    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:39.724741    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:39.724753    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:39.724761    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:39.727231    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:39.727246    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:39.727254    5434 round_trippers.go:580]     Content-Length: 4029
	I0806 00:57:39.727259    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:39 GMT
	I0806 00:57:39.727263    5434 round_trippers.go:580]     Audit-Id: b5469a79-afa2-4116-b64c-cce3265be3e2
	I0806 00:57:39.727266    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:39.727273    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:39.727276    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:39.727280    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:39.727343    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1705","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3005 chars]
	I0806 00:57:40.224760    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:40.224779    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:40.224787    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:40.224793    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:40.226812    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:40.226822    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:40.226827    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:40.226829    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:40.226832    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:40.226834    5434 round_trippers.go:580]     Content-Length: 4029
	I0806 00:57:40.226837    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:40 GMT
	I0806 00:57:40.226840    5434 round_trippers.go:580]     Audit-Id: 35941e6b-5c8b-4737-aee3-5730c81b0175
	I0806 00:57:40.226843    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:40.226886    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1705","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3005 chars]
	I0806 00:57:40.726095    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:40.726116    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:40.726128    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:40.726135    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:40.728529    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:40.728544    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:40.728550    5434 round_trippers.go:580]     Content-Length: 4029
	I0806 00:57:40.728554    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:40 GMT
	I0806 00:57:40.728557    5434 round_trippers.go:580]     Audit-Id: 9616e15e-e7ac-4eef-8533-a6e9e989cd1d
	I0806 00:57:40.728562    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:40.728571    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:40.728574    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:40.728577    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:40.728638    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1705","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3005 chars]
	I0806 00:57:40.728834    5434 node_ready.go:53] node "multinode-100000-m02" has status "Ready":"False"
	I0806 00:57:41.224765    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:41.224781    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:41.224794    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:41.224799    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:41.226476    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:41.226488    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:41.226494    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:41.226498    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:41.226502    5434 round_trippers.go:580]     Content-Length: 4029
	I0806 00:57:41.226516    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:41 GMT
	I0806 00:57:41.226523    5434 round_trippers.go:580]     Audit-Id: c547ed13-66a3-476a-8a5c-3e377cd019d1
	I0806 00:57:41.226526    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:41.226528    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:41.226583    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1705","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3005 chars]
	I0806 00:57:41.725236    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:41.725248    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:41.725259    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:41.725277    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:41.726907    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:41.726930    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:41.726937    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:41 GMT
	I0806 00:57:41.726942    5434 round_trippers.go:580]     Audit-Id: 3caf33af-9cac-487e-b195-58119f24d22e
	I0806 00:57:41.726945    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:41.726951    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:41.726955    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:41.726957    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:41.726960    5434 round_trippers.go:580]     Content-Length: 4029
	I0806 00:57:41.727005    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1705","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3005 chars]
	I0806 00:57:42.224914    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:42.224930    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:42.224937    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:42.224940    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:42.226393    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:42.226406    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:42.226413    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:42.226418    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:42.226421    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:42.226424    5434 round_trippers.go:580]     Content-Length: 4029
	I0806 00:57:42.226428    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:42 GMT
	I0806 00:57:42.226438    5434 round_trippers.go:580]     Audit-Id: 8776b9b1-351f-4d08-86af-84f455f47b75
	I0806 00:57:42.226441    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:42.226470    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1705","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3005 chars]
	I0806 00:57:42.723974    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:42.724030    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:42.724036    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:42.724040    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:42.725742    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:42.725755    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:42.725764    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:42.725768    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:42.725772    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:42.725785    5434 round_trippers.go:580]     Content-Length: 4029
	I0806 00:57:42.725791    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:42 GMT
	I0806 00:57:42.725794    5434 round_trippers.go:580]     Audit-Id: a21dfbc2-bc52-48a4-92db-8cb89208b4f7
	I0806 00:57:42.725797    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:42.725850    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1705","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3005 chars]
	I0806 00:57:43.224069    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:43.224081    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:43.224087    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:43.224090    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:43.225620    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:43.225631    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:43.225637    5434 round_trippers.go:580]     Audit-Id: 0c1fa402-7a17-4180-ad97-85a88e052223
	I0806 00:57:43.225640    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:43.225650    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:43.225655    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:43.225657    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:43.225660    5434 round_trippers.go:580]     Content-Length: 4029
	I0806 00:57:43.225666    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:43 GMT
	I0806 00:57:43.225708    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1705","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3005 chars]
	I0806 00:57:43.225863    5434 node_ready.go:53] node "multinode-100000-m02" has status "Ready":"False"
	I0806 00:57:43.725011    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:43.725026    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:43.725032    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:43.725036    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:43.726950    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:43.726962    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:43.726968    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:43.726972    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:43.726974    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:43.726977    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:43.726980    5434 round_trippers.go:580]     Content-Length: 4029
	I0806 00:57:43.726983    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:43 GMT
	I0806 00:57:43.726987    5434 round_trippers.go:580]     Audit-Id: 2ec77324-ad35-4954-86c3-1cd63f932963
	I0806 00:57:43.727037    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1705","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3005 chars]
	I0806 00:57:44.224193    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:44.224209    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:44.224216    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:44.224219    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:44.225890    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:44.225904    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:44.225913    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:44.225919    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:44.225925    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:44.225929    5434 round_trippers.go:580]     Content-Length: 4029
	I0806 00:57:44.225933    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:44 GMT
	I0806 00:57:44.225936    5434 round_trippers.go:580]     Audit-Id: ee0f4f06-8dc0-48ba-a078-5881e2527bb5
	I0806 00:57:44.225940    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:44.225999    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1705","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3005 chars]
	I0806 00:57:44.724507    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:44.724533    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:44.724558    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:44.724606    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:44.727414    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:44.727430    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:44.727437    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:44.727442    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:44.727446    5434 round_trippers.go:580]     Content-Length: 4029
	I0806 00:57:44.727450    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:44 GMT
	I0806 00:57:44.727455    5434 round_trippers.go:580]     Audit-Id: 61c9d604-08b6-44d8-af4a-18e3dce1db78
	I0806 00:57:44.727459    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:44.727462    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:44.727524    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1705","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3005 chars]
	I0806 00:57:45.225103    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:45.225132    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:45.225145    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:45.225151    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:45.228092    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:45.228108    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:45.228116    5434 round_trippers.go:580]     Audit-Id: fa82d64a-5a25-4f13-beb3-16ddfa3bedb5
	I0806 00:57:45.228122    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:45.228126    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:45.228130    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:45.228134    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:45.228138    5434 round_trippers.go:580]     Content-Length: 4029
	I0806 00:57:45.228142    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:45 GMT
	I0806 00:57:45.228204    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1705","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3005 chars]
	I0806 00:57:45.228411    5434 node_ready.go:53] node "multinode-100000-m02" has status "Ready":"False"
	I0806 00:57:45.724009    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:45.724021    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:45.724027    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:45.724030    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:45.725579    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:45.725590    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:45.725615    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:45 GMT
	I0806 00:57:45.725626    5434 round_trippers.go:580]     Audit-Id: bb1e2183-350f-4b0d-b3a0-d525e6313f9d
	I0806 00:57:45.725630    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:45.725633    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:45.725652    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:45.725659    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:45.725663    5434 round_trippers.go:580]     Content-Length: 4029
	I0806 00:57:45.725693    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1705","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3005 chars]
	I0806 00:57:46.224012    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:46.224047    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:46.224057    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:46.224063    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:46.225437    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:46.225447    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:46.225452    5434 round_trippers.go:580]     Audit-Id: cb77a9df-a7b2-4a13-aabf-c2368fecaf1c
	I0806 00:57:46.225455    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:46.225458    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:46.225460    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:46.225463    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:46.225467    5434 round_trippers.go:580]     Content-Length: 4029
	I0806 00:57:46.225469    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:46 GMT
	I0806 00:57:46.225517    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1705","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3005 chars]
	I0806 00:57:46.725493    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:46.725529    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:46.725537    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:46.725542    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:46.727073    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:46.727083    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:46.727088    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:46.727091    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:46.727095    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:46.727098    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:46.727100    5434 round_trippers.go:580]     Content-Length: 4029
	I0806 00:57:46.727104    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:46 GMT
	I0806 00:57:46.727107    5434 round_trippers.go:580]     Audit-Id: b8563860-357b-4864-a6e0-acd9dad98a47
	I0806 00:57:46.727198    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1705","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3005 chars]
	I0806 00:57:47.224885    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:47.224941    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:47.224948    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:47.224952    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:47.226603    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:47.226615    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:47.226620    5434 round_trippers.go:580]     Audit-Id: e1431d28-11d5-4df9-a7ae-03ec8429670c
	I0806 00:57:47.226624    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:47.226626    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:47.226629    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:47.226631    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:47.226634    5434 round_trippers.go:580]     Content-Length: 4029
	I0806 00:57:47.226636    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:47 GMT
	I0806 00:57:47.226709    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1705","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3005 chars]
	I0806 00:57:47.726241    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:47.726269    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:47.726281    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:47.726296    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:47.728603    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:47.728625    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:47.728642    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:47.728653    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:47.728658    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:47.728662    5434 round_trippers.go:580]     Content-Length: 4029
	I0806 00:57:47.728666    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:47 GMT
	I0806 00:57:47.728670    5434 round_trippers.go:580]     Audit-Id: 32992b15-0a16-485d-8662-9084c12f8e92
	I0806 00:57:47.728673    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:47.728738    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1705","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3005 chars]
	I0806 00:57:47.728930    5434 node_ready.go:53] node "multinode-100000-m02" has status "Ready":"False"
	I0806 00:57:48.224302    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:48.224324    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:48.224335    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:48.224341    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:48.226950    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:48.226965    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:48.226977    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:48.226982    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:48.226985    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:48.227007    5434 round_trippers.go:580]     Content-Length: 4029
	I0806 00:57:48.227013    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:48 GMT
	I0806 00:57:48.227017    5434 round_trippers.go:580]     Audit-Id: 54295014-f023-493a-b7b2-002d81b0a3f7
	I0806 00:57:48.227023    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:48.227096    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1705","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3005 chars]
	I0806 00:57:48.724182    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:48.724203    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:48.724214    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:48.724221    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:48.726465    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:48.726486    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:48.726498    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:48.726533    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:48 GMT
	I0806 00:57:48.726542    5434 round_trippers.go:580]     Audit-Id: afcb1c88-7e72-4ed5-8c92-c8a6f6febea4
	I0806 00:57:48.726546    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:48.726549    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:48.726553    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:48.726633    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:49.224719    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:49.224748    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:49.224760    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:49.224765    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:49.227461    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:49.227480    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:49.227496    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:49.227502    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:49 GMT
	I0806 00:57:49.227506    5434 round_trippers.go:580]     Audit-Id: 12afd45f-a2ec-486f-8f90-4bb62c2151b4
	I0806 00:57:49.227511    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:49.227515    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:49.227518    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:49.227776    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:49.724496    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:49.724527    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:49.724539    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:49.724546    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:49.727213    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:49.727228    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:49.727235    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:49.727239    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:49.727243    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:49.727247    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:49 GMT
	I0806 00:57:49.727250    5434 round_trippers.go:580]     Audit-Id: 3261aa2b-ae3b-4565-98cd-3ebf59e0fd3b
	I0806 00:57:49.727255    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:49.727470    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:50.225030    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:50.225053    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:50.225065    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:50.225070    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:50.227951    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:50.227966    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:50.227984    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:50.228023    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:50 GMT
	I0806 00:57:50.228032    5434 round_trippers.go:580]     Audit-Id: 729a3da8-4502-4f59-8163-c1cbbf872830
	I0806 00:57:50.228036    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:50.228039    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:50.228043    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:50.228206    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:50.228417    5434 node_ready.go:53] node "multinode-100000-m02" has status "Ready":"False"
	I0806 00:57:50.725459    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:50.725481    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:50.725493    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:50.725526    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:50.728081    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:50.728097    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:50.728107    5434 round_trippers.go:580]     Audit-Id: 360b6392-fd29-4993-91b3-487e9f6775b0
	I0806 00:57:50.728115    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:50.728121    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:50.728126    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:50.728130    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:50.728133    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:50 GMT
	I0806 00:57:50.728374    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:51.224483    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:51.224588    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:51.224615    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:51.224619    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:51.227243    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:51.227254    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:51.227261    5434 round_trippers.go:580]     Audit-Id: b7499a85-98b8-4b7c-9a4f-31a58d18da1c
	I0806 00:57:51.227267    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:51.227270    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:51.227274    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:51.227279    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:51.227283    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:51 GMT
	I0806 00:57:51.227379    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:51.725230    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:51.725250    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:51.725261    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:51.725267    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:51.727905    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:51.727919    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:51.727926    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:51.727930    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:51.727933    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:51 GMT
	I0806 00:57:51.727937    5434 round_trippers.go:580]     Audit-Id: 9ea6e03d-81f1-44a3-89cf-bafa126532f8
	I0806 00:57:51.727941    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:51.727944    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:51.728077    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:52.224921    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:52.224946    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:52.224960    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:52.224967    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:52.227837    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:52.227852    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:52.227860    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:52.227864    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:52.227869    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:52 GMT
	I0806 00:57:52.227873    5434 round_trippers.go:580]     Audit-Id: 98fca0cf-aa76-4ea4-8522-7cd39d623570
	I0806 00:57:52.227877    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:52.227881    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:52.227942    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:52.725013    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:52.725037    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:52.725048    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:52.725060    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:52.727753    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:52.727771    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:52.727778    5434 round_trippers.go:580]     Audit-Id: 46dd3481-b051-4ab0-ae1c-95d8b0e02e35
	I0806 00:57:52.727788    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:52.727795    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:52.727800    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:52.727805    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:52.727810    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:52 GMT
	I0806 00:57:52.728341    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:52.728586    5434 node_ready.go:53] node "multinode-100000-m02" has status "Ready":"False"
	I0806 00:57:53.224625    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:53.224637    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:53.224643    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:53.224646    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:53.226506    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:53.226519    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:53.226524    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:53.226528    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:53.226531    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:53.226533    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:53.226535    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:53 GMT
	I0806 00:57:53.226537    5434 round_trippers.go:580]     Audit-Id: da8e8899-47b8-4b35-941d-db363ee18d6e
	I0806 00:57:53.226637    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:53.726051    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:53.726151    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:53.726167    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:53.726173    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:53.729063    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:53.729077    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:53.729085    5434 round_trippers.go:580]     Audit-Id: 3982aeac-6867-49a8-b6e8-84fffb7dbf4b
	I0806 00:57:53.729089    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:53.729092    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:53.729097    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:53.729100    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:53.729124    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:53 GMT
	I0806 00:57:53.729213    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:54.224926    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:54.224947    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:54.224960    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:54.224967    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:54.227074    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:54.227086    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:54.227096    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:54.227105    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:54.227112    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:54 GMT
	I0806 00:57:54.227118    5434 round_trippers.go:580]     Audit-Id: 8e5bfa48-c622-444e-aebd-1b2b8f7bfcaf
	I0806 00:57:54.227124    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:54.227128    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:54.227399    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:54.725678    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:54.725704    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:54.725717    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:54.725723    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:54.728089    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:54.728109    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:54.728121    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:54 GMT
	I0806 00:57:54.728127    5434 round_trippers.go:580]     Audit-Id: affee715-d54b-43e1-be9c-f0298de6368d
	I0806 00:57:54.728134    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:54.728138    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:54.728173    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:54.728182    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:54.728325    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:55.224533    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:55.224561    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:55.224607    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:55.224632    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:55.227613    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:55.227630    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:55.227638    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:55.227644    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:55 GMT
	I0806 00:57:55.227648    5434 round_trippers.go:580]     Audit-Id: fa358674-c49e-4645-b893-642d10c9b29b
	I0806 00:57:55.227651    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:55.227655    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:55.227658    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:55.227814    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:55.228025    5434 node_ready.go:53] node "multinode-100000-m02" has status "Ready":"False"
	I0806 00:57:55.724444    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:55.724466    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:55.724479    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:55.724485    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:55.726961    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:55.726977    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:55.726984    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:55.726988    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:55.727002    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:55 GMT
	I0806 00:57:55.727010    5434 round_trippers.go:580]     Audit-Id: bf6537d8-989b-4773-b7c1-952ad3e3597f
	I0806 00:57:55.727016    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:55.727020    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:55.727307    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:56.224462    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:56.224483    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:56.224495    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:56.224501    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:56.226706    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:56.226720    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:56.226728    5434 round_trippers.go:580]     Audit-Id: 40a6c958-d47f-4c5f-b662-81f57d85e731
	I0806 00:57:56.226732    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:56.226735    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:56.226738    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:56.226741    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:56.226745    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:56 GMT
	I0806 00:57:56.226812    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:56.724406    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:56.724428    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:56.724437    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:56.724443    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:56.726996    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:56.727008    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:56.727015    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:56.727080    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:56.727095    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:56.727101    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:56.727103    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:56 GMT
	I0806 00:57:56.727107    5434 round_trippers.go:580]     Audit-Id: f0ff7515-3524-4187-b078-2f0438d10e89
	I0806 00:57:56.727208    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:57.225306    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:57.225394    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:57.225409    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:57.225416    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:57.227979    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:57.227995    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:57.228003    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:57.228007    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:57.228010    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:57.228014    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:57 GMT
	I0806 00:57:57.228019    5434 round_trippers.go:580]     Audit-Id: df809995-94a1-4c0c-a430-17d60c9e7015
	I0806 00:57:57.228032    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:57.228205    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:57.228424    5434 node_ready.go:53] node "multinode-100000-m02" has status "Ready":"False"
	I0806 00:57:57.726019    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:57.726046    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:57.726059    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:57.726067    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:57.728773    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:57.728793    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:57.728800    5434 round_trippers.go:580]     Audit-Id: d753c800-6ae4-4a08-bbfe-c56afc9035aa
	I0806 00:57:57.728805    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:57.728819    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:57.728824    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:57.728828    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:57.728831    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:57 GMT
	I0806 00:57:57.728909    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:58.225669    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:58.225694    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:58.225705    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:58.225710    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:58.228253    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:58.228269    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:58.228276    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:58.228281    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:58.228285    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:58.228288    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:58.228293    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:58 GMT
	I0806 00:57:58.228298    5434 round_trippers.go:580]     Audit-Id: f4a05e02-5c66-49dc-a953-8ed50c8e8f68
	I0806 00:57:58.228382    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1725","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3397 chars]
	I0806 00:57:58.725444    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:58.725478    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:58.725557    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:58.725567    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:58.728312    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:58.728327    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:58.728334    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:58.728338    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:58.728343    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:58.728346    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:58.728350    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:58 GMT
	I0806 00:57:58.728354    5434 round_trippers.go:580]     Audit-Id: e8e19a9c-bef2-4fd3-85fd-8c3d4d539afc
	I0806 00:57:58.728437    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1736","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3263 chars]
	I0806 00:57:58.728653    5434 node_ready.go:49] node "multinode-100000-m02" has status "Ready":"True"
	I0806 00:57:58.728664    5434 node_ready.go:38] duration metric: took 20.0045333s for node "multinode-100000-m02" to be "Ready" ...
	I0806 00:57:58.728672    5434 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 00:57:58.728719    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0806 00:57:58.728726    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:58.728733    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:58.728738    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:58.731131    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:58.731143    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:58.731150    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:58.731153    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:58.731157    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:58.731160    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:58 GMT
	I0806 00:57:58.731164    5434 round_trippers.go:580]     Audit-Id: 1ea7890c-0d77-4a59-85dc-877fe634a3fd
	I0806 00:57:58.731166    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:58.732128    5434 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1737"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"1561","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86448 chars]
	I0806 00:57:58.734016    5434 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace to be "Ready" ...
	I0806 00:57:58.734058    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-snf8h
	I0806 00:57:58.734063    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:58.734069    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:58.734074    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:58.735821    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:58.735830    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:58.735835    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:58.735839    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:58 GMT
	I0806 00:57:58.735842    5434 round_trippers.go:580]     Audit-Id: 49535de9-ee43-41e0-a2b4-1e5382858f98
	I0806 00:57:58.735850    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:58.735854    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:58.735857    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:58.736046    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-snf8h","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"80bd44de-6f91-4e47-8832-a66b3c64808d","resourceVersion":"1561","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"7b992694-3bd4-4be8-bcbe-36b2f0238957","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b992694-3bd4-4be8-bcbe-36b2f0238957\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6784 chars]
	I0806 00:57:58.736293    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:57:58.736300    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:58.736305    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:58.736309    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:58.737613    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:58.737620    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:58.737625    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:58 GMT
	I0806 00:57:58.737636    5434 round_trippers.go:580]     Audit-Id: 1b9b523d-45a4-446f-a983-3cb9d55c7523
	I0806 00:57:58.737640    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:58.737643    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:58.737647    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:58.737650    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:58.737906    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1566","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0806 00:57:58.738086    5434 pod_ready.go:92] pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace has status "Ready":"True"
	I0806 00:57:58.738094    5434 pod_ready.go:81] duration metric: took 4.068285ms for pod "coredns-7db6d8ff4d-snf8h" in "kube-system" namespace to be "Ready" ...
	I0806 00:57:58.738102    5434 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:57:58.738134    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-100000
	I0806 00:57:58.738138    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:58.738144    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:58.738147    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:58.739543    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:58.739550    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:58.739560    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:58.739564    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:58.739568    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:58 GMT
	I0806 00:57:58.739571    5434 round_trippers.go:580]     Audit-Id: efe03bb4-347e-4ec8-9d1a-5b437bd243dd
	I0806 00:57:58.739575    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:58.739581    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:58.739777    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-100000","namespace":"kube-system","uid":"227ab7d9-399e-4151-bee7-1520182e38fe","resourceVersion":"1536","creationTimestamp":"2024-08-06T07:37:59Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"4d956ffcd8bdef6a75a3174d9c9d792c","kubernetes.io/config.mirror":"4d956ffcd8bdef6a75a3174d9c9d792c","kubernetes.io/config.seen":"2024-08-06T07:37:55.730523562Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:37:59Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6358 chars]
	I0806 00:57:58.739989    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:57:58.739995    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:58.740001    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:58.740004    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:58.740979    5434 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:57:58.740986    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:58.740990    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:58 GMT
	I0806 00:57:58.740995    5434 round_trippers.go:580]     Audit-Id: 20653bf5-4fc8-450b-9f45-63adf08d2e0a
	I0806 00:57:58.740999    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:58.741004    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:58.741009    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:58.741014    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:58.741192    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1566","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0806 00:57:58.741360    5434 pod_ready.go:92] pod "etcd-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:57:58.741368    5434 pod_ready.go:81] duration metric: took 3.260646ms for pod "etcd-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:57:58.741378    5434 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:57:58.741405    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-100000
	I0806 00:57:58.741410    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:58.741415    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:58.741419    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:58.742478    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:58.742487    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:58.742492    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:58.742496    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:58.742502    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:58.742507    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:58.742511    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:58 GMT
	I0806 00:57:58.742514    5434 round_trippers.go:580]     Audit-Id: e0b1089a-46ca-44cf-8422-945411302001
	I0806 00:57:58.742620    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-100000","namespace":"kube-system","uid":"ce1dee9b-5f30-49a9-9066-7faf5f65c4d3","resourceVersion":"1538","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"7812fbdfd4f741d8b504bcb30d9268c5","kubernetes.io/config.mirror":"7812fbdfd4f741d8b504bcb30d9268c5","kubernetes.io/config.seen":"2024-08-06T07:38:00.425843150Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7892 chars]
	I0806 00:57:58.742857    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:57:58.742864    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:58.742870    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:58.742874    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:58.743732    5434 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:57:58.743739    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:58.743744    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:58.743747    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:58 GMT
	I0806 00:57:58.743751    5434 round_trippers.go:580]     Audit-Id: 78fc653a-6937-4ab0-a3db-331f4cec6452
	I0806 00:57:58.743754    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:58.743756    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:58.743759    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:58.743860    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1566","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0806 00:57:58.744027    5434 pod_ready.go:92] pod "kube-apiserver-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:57:58.744034    5434 pod_ready.go:81] duration metric: took 2.65106ms for pod "kube-apiserver-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:57:58.744040    5434 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:57:58.744064    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-100000
	I0806 00:57:58.744068    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:58.744073    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:58.744077    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:58.744958    5434 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:57:58.744964    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:58.744969    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:58.744973    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:58.744975    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:58.744979    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:58 GMT
	I0806 00:57:58.744981    5434 round_trippers.go:580]     Audit-Id: de2c1c4a-2a16-4d10-a066-0020fb4f576d
	I0806 00:57:58.744984    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:58.745293    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-100000","namespace":"kube-system","uid":"cefe88fb-c337-47c3-b4f2-acdadde539f2","resourceVersion":"1546","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"0ae29164078dfb7d8ac7d5a935c4d875","kubernetes.io/config.mirror":"0ae29164078dfb7d8ac7d5a935c4d875","kubernetes.io/config.seen":"2024-08-06T07:38:00.425770816Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7465 chars]
	I0806 00:57:58.745511    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:57:58.745517    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:58.745523    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:58.745526    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:58.746444    5434 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0806 00:57:58.746451    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:58.746456    5434 round_trippers.go:580]     Audit-Id: 7be2dd48-70e6-4ea9-9d1c-69641cba744b
	I0806 00:57:58.746459    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:58.746466    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:58.746470    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:58.746472    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:58.746474    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:58 GMT
	I0806 00:57:58.746583    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1566","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0806 00:57:58.746739    5434 pod_ready.go:92] pod "kube-controller-manager-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:57:58.746746    5434 pod_ready.go:81] duration metric: took 2.701845ms for pod "kube-controller-manager-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:57:58.746755    5434 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-crsrr" in "kube-system" namespace to be "Ready" ...
	I0806 00:57:58.926258    5434 request.go:629] Waited for 179.453205ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crsrr
	I0806 00:57:58.926397    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-crsrr
	I0806 00:57:58.926409    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:58.926418    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:58.926434    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:58.929124    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:58.929141    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:58.929148    5434 round_trippers.go:580]     Audit-Id: 93e73643-103a-4b8c-9b99-e2cf305ff493
	I0806 00:57:58.929153    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:58.929157    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:58.929161    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:58.929164    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:58.929177    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:59 GMT
	I0806 00:57:58.929292    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-crsrr","generateName":"kube-proxy-","namespace":"kube-system","uid":"f72beca3-9601-4aad-b3ba-33f8de5db052","resourceVersion":"1541","creationTimestamp":"2024-08-06T07:38:14Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aeb7868a-2175-4480-b58d-3eb9a593c884","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:14Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aeb7868a-2175-4480-b58d-3eb9a593c884\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0806 00:57:59.126390    5434 request.go:629] Waited for 196.764666ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:57:59.126455    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:57:59.126461    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:59.126467    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:59.126471    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:59.128204    5434 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0806 00:57:59.128213    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:59.128217    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:59.128221    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:59.128224    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:59.128227    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:59.128230    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:59 GMT
	I0806 00:57:59.128232    5434 round_trippers.go:580]     Audit-Id: 374278f2-b28e-4d4e-aec4-37bf681a998b
	I0806 00:57:59.128407    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1566","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0806 00:57:59.128605    5434 pod_ready.go:92] pod "kube-proxy-crsrr" in "kube-system" namespace has status "Ready":"True"
	I0806 00:57:59.128614    5434 pod_ready.go:81] duration metric: took 381.847235ms for pod "kube-proxy-crsrr" in "kube-system" namespace to be "Ready" ...
	I0806 00:57:59.128621    5434 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d9c42" in "kube-system" namespace to be "Ready" ...
	I0806 00:57:59.326716    5434 request.go:629] Waited for 198.050124ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d9c42
	I0806 00:57:59.326803    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d9c42
	I0806 00:57:59.326813    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:59.326824    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:59.326836    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:59.329406    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:59.329426    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:59.329437    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:59 GMT
	I0806 00:57:59.329450    5434 round_trippers.go:580]     Audit-Id: 978731a9-98d4-4f40-9430-2e4146495769
	I0806 00:57:59.329456    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:59.329462    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:59.329467    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:59.329473    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:59.329660    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-d9c42","generateName":"kube-proxy-","namespace":"kube-system","uid":"fe685526-4722-4113-b2b3-9a84182541b7","resourceVersion":"1590","creationTimestamp":"2024-08-06T07:52:07Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aeb7868a-2175-4480-b58d-3eb9a593c884","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:52:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aeb7868a-2175-4480-b58d-3eb9a593c884\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6057 chars]
	I0806 00:57:59.526645    5434 request.go:629] Waited for 196.624715ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m03
	I0806 00:57:59.526768    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m03
	I0806 00:57:59.526778    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:59.526790    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:59.526800    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:59.529319    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:59.529335    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:59.529346    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:59 GMT
	I0806 00:57:59.529377    5434 round_trippers.go:580]     Audit-Id: 878f200e-3aba-4dd6-be77-d05bbbbb5647
	I0806 00:57:59.529388    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:59.529391    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:59.529394    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:59.529398    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:59.529479    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m03","uid":"3008e7de-9d1d-41e0-b794-0ab4c70ffeba","resourceVersion":"1602","creationTimestamp":"2024-08-06T07:53:13Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_53_13_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:53:13Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4567 chars]
	I0806 00:57:59.529736    5434 pod_ready.go:97] node "multinode-100000-m03" hosting pod "kube-proxy-d9c42" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-100000-m03" has status "Ready":"Unknown"
	I0806 00:57:59.529751    5434 pod_ready.go:81] duration metric: took 401.118154ms for pod "kube-proxy-d9c42" in "kube-system" namespace to be "Ready" ...
	E0806 00:57:59.529759    5434 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-100000-m03" hosting pod "kube-proxy-d9c42" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-100000-m03" has status "Ready":"Unknown"
	I0806 00:57:59.529765    5434 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xgwwm" in "kube-system" namespace to be "Ready" ...
	I0806 00:57:59.726474    5434 request.go:629] Waited for 196.559763ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xgwwm
	I0806 00:57:59.726524    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xgwwm
	I0806 00:57:59.726532    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:59.726546    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:59.726556    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:59.729225    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:59.729241    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:59.729248    5434 round_trippers.go:580]     Audit-Id: f23d3e3e-304b-4e92-a2d6-49b4b22f01ce
	I0806 00:57:59.729252    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:59.729255    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:59.729261    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:59.729264    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:59.729267    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:57:59 GMT
	I0806 00:57:59.729379    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-xgwwm","generateName":"kube-proxy-","namespace":"kube-system","uid":"f4cdef35-1817-4fab-a6a2-0141da3bb973","resourceVersion":"1714","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aeb7868a-2175-4480-b58d-3eb9a593c884","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aeb7868a-2175-4480-b58d-3eb9a593c884\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5832 chars]
	I0806 00:57:59.926618    5434 request.go:629] Waited for 196.817351ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:59.926675    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000-m02
	I0806 00:57:59.926685    5434 round_trippers.go:469] Request Headers:
	I0806 00:57:59.926694    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:57:59.926700    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:57:59.929231    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:57:59.929249    5434 round_trippers.go:577] Response Headers:
	I0806 00:57:59.929260    5434 round_trippers.go:580]     Audit-Id: 48142590-da71-4404-bedb-b74b0430c085
	I0806 00:57:59.929267    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:57:59.929271    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:57:59.929274    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:57:59.929279    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:57:59.929286    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:58:00 GMT
	I0806 00:57:59.929372    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000-m02","uid":"c7581c7c-3fcb-40c9-9891-bef15ff45b0c","resourceVersion":"1741","creationTimestamp":"2024-08-06T07:57:38Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_06T00_57_38_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:57:38Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3143 chars]
	I0806 00:57:59.929584    5434 pod_ready.go:92] pod "kube-proxy-xgwwm" in "kube-system" namespace has status "Ready":"True"
	I0806 00:57:59.929594    5434 pod_ready.go:81] duration metric: took 399.813937ms for pod "kube-proxy-xgwwm" in "kube-system" namespace to be "Ready" ...
	I0806 00:57:59.929602    5434 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:58:00.126190    5434 request.go:629] Waited for 196.539569ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-100000
	I0806 00:58:00.126333    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-100000
	I0806 00:58:00.126349    5434 round_trippers.go:469] Request Headers:
	I0806 00:58:00.126361    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:58:00.126373    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:58:00.128989    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:58:00.129010    5434 round_trippers.go:577] Response Headers:
	I0806 00:58:00.129018    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:58:00 GMT
	I0806 00:58:00.129024    5434 round_trippers.go:580]     Audit-Id: 90a9b5e2-2d0e-4d7c-9e64-8e2b04889f34
	I0806 00:58:00.129028    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:58:00.129031    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:58:00.129035    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:58:00.129040    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:58:00.129165    5434 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-100000","namespace":"kube-system","uid":"773d7bde-86f3-4e9d-b4aa-67ca3b345180","resourceVersion":"1547","creationTimestamp":"2024-08-06T07:38:00Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"4d38f57d568be838072abd789adb44b9","kubernetes.io/config.mirror":"4d38f57d568be838072abd789adb44b9","kubernetes.io/config.seen":"2024-08-06T07:38:00.425836810Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-06T07:38:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5195 chars]
	I0806 00:58:00.326311    5434 request.go:629] Waited for 196.774304ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:58:00.326376    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-100000
	I0806 00:58:00.326385    5434 round_trippers.go:469] Request Headers:
	I0806 00:58:00.326396    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:58:00.326403    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:58:00.328549    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:58:00.328562    5434 round_trippers.go:577] Response Headers:
	I0806 00:58:00.328569    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:58:00.328574    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:58:00 GMT
	I0806 00:58:00.328578    5434 round_trippers.go:580]     Audit-Id: 9a1da674-12e1-4deb-a889-e64176873f6e
	I0806 00:58:00.328583    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:58:00.328587    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:58:00.328593    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:58:00.328751    5434 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1566","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-06T07:37:58Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0806 00:58:00.329008    5434 pod_ready.go:92] pod "kube-scheduler-multinode-100000" in "kube-system" namespace has status "Ready":"True"
	I0806 00:58:00.329019    5434 pod_ready.go:81] duration metric: took 399.403763ms for pod "kube-scheduler-multinode-100000" in "kube-system" namespace to be "Ready" ...
	I0806 00:58:00.329029    5434 pod_ready.go:38] duration metric: took 1.600314666s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 00:58:00.329048    5434 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 00:58:00.329107    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:58:00.339908    5434 system_svc.go:56] duration metric: took 10.859363ms WaitForService to wait for kubelet
	I0806 00:58:00.339921    5434 kubeadm.go:582] duration metric: took 21.805335392s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 00:58:00.339933    5434 node_conditions.go:102] verifying NodePressure condition ...
	I0806 00:58:00.526323    5434 request.go:629] Waited for 186.348415ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes
	I0806 00:58:00.526414    5434 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0806 00:58:00.526423    5434 round_trippers.go:469] Request Headers:
	I0806 00:58:00.526431    5434 round_trippers.go:473]     Accept: application/json, */*
	I0806 00:58:00.526437    5434 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0806 00:58:00.528924    5434 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0806 00:58:00.528939    5434 round_trippers.go:577] Response Headers:
	I0806 00:58:00.528945    5434 round_trippers.go:580]     Audit-Id: c8982dd1-c794-4084-92fd-0f0afa65b0cf
	I0806 00:58:00.528948    5434 round_trippers.go:580]     Cache-Control: no-cache, private
	I0806 00:58:00.528951    5434 round_trippers.go:580]     Content-Type: application/json
	I0806 00:58:00.528953    5434 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 83065605-d6b4-4bd9-8a46-749053108704
	I0806 00:58:00.528956    5434 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: e6ba97b0-2015-4cd5-8f1c-06ffde748431
	I0806 00:58:00.528959    5434 round_trippers.go:580]     Date: Tue, 06 Aug 2024 07:58:00 GMT
	I0806 00:58:00.529101    5434 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1741"},"items":[{"metadata":{"name":"multinode-100000","uid":"c31e3731-de36-43ee-aa76-aa45855b148f","resourceVersion":"1566","creationTimestamp":"2024-08-06T07:37:58Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-100000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"e92cb06692f5ea1ba801d10d148e5e92e807f9c8","minikube.k8s.io/name":"multinode-100000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_06T00_38_01_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14922 chars]
	I0806 00:58:00.529494    5434 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 00:58:00.529503    5434 node_conditions.go:123] node cpu capacity is 2
	I0806 00:58:00.529510    5434 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 00:58:00.529513    5434 node_conditions.go:123] node cpu capacity is 2
	I0806 00:58:00.529516    5434 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 00:58:00.529519    5434 node_conditions.go:123] node cpu capacity is 2
	I0806 00:58:00.529527    5434 node_conditions.go:105] duration metric: took 189.582536ms to run NodePressure ...
	I0806 00:58:00.529536    5434 start.go:241] waiting for startup goroutines ...
	I0806 00:58:00.529554    5434 start.go:255] writing updated cluster config ...
	I0806 00:58:00.551443    5434 out.go:177] 
	I0806 00:58:00.572784    5434 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:58:00.572913    5434 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:58:00.594886    5434 out.go:177] * Starting "multinode-100000-m03" worker node in "multinode-100000" cluster
	I0806 00:58:00.653191    5434 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:58:00.653231    5434 cache.go:56] Caching tarball of preloaded images
	I0806 00:58:00.653469    5434 preload.go:172] Found /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0806 00:58:00.653489    5434 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 00:58:00.653617    5434 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:58:00.654431    5434 start.go:360] acquireMachinesLock for multinode-100000-m03: {Name:mk23fe223591838ba69a1052c4474834b6e8897d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 00:58:00.654556    5434 start.go:364] duration metric: took 100.571µs to acquireMachinesLock for "multinode-100000-m03"
	I0806 00:58:00.654582    5434 start.go:96] Skipping create...Using existing machine configuration
	I0806 00:58:00.654590    5434 fix.go:54] fixHost starting: m03
	I0806 00:58:00.655008    5434 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:58:00.655043    5434 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:58:00.664436    5434 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53133
	I0806 00:58:00.664809    5434 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:58:00.665188    5434 main.go:141] libmachine: Using API Version  1
	I0806 00:58:00.665205    5434 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:58:00.665431    5434 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:58:00.665577    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .DriverName
	I0806 00:58:00.665664    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetState
	I0806 00:58:00.665753    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:58:00.665841    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | hyperkit pid from json: 5220
	I0806 00:58:00.666774    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | hyperkit pid 5220 missing from process table
	I0806 00:58:00.666801    5434 fix.go:112] recreateIfNeeded on multinode-100000-m03: state=Stopped err=<nil>
	I0806 00:58:00.666809    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .DriverName
	W0806 00:58:00.666891    5434 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 00:58:00.687912    5434 out.go:177] * Restarting existing hyperkit VM for "multinode-100000-m03" ...
	I0806 00:58:00.730015    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .Start
	I0806 00:58:00.730258    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:58:00.730287    5434 main.go:141] libmachine: (multinode-100000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/hyperkit.pid
	I0806 00:58:00.731560    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | hyperkit pid 5220 missing from process table
	I0806 00:58:00.731574    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | pid 5220 is in state "Stopped"
	I0806 00:58:00.731586    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/hyperkit.pid...
	I0806 00:58:00.731958    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | Using UUID 83a9a765-665a-44ea-930f-df1a6331c821
	I0806 00:58:00.756417    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | Generated MAC 4e:ad:42:3:c5:ed
	I0806 00:58:00.756443    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000
	I0806 00:58:00.756606    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:00 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"83a9a765-665a-44ea-930f-df1a6331c821", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000383590)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", pr
ocess:(*os.Process)(nil)}
	I0806 00:58:00.756641    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:00 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"83a9a765-665a-44ea-930f-df1a6331c821", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000383590)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", pr
ocess:(*os.Process)(nil)}
	I0806 00:58:00.756701    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:00 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "83a9a765-665a-44ea-930f-df1a6331c821", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/multinode-100000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/bzimage,/Users/jenkins
/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"}
	I0806 00:58:00.756764    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:00 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 83a9a765-665a-44ea-930f-df1a6331c821 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/multinode-100000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-1
00000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-100000"
	I0806 00:58:00.756783    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:00 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0806 00:58:00.758162    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:00 DEBUG: hyperkit: Pid is 5554
	I0806 00:58:00.758623    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | Attempt 0
	I0806 00:58:00.758640    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:58:00.758688    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | hyperkit pid from json: 5554
	I0806 00:58:00.760393    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | Searching for 4e:ad:42:3:c5:ed in /var/db/dhcpd_leases ...
	I0806 00:58:00.760484    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | Found 14 entries in /var/db/dhcpd_leases!
	I0806 00:58:00.760502    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b32880}
	I0806 00:58:00.760521    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b32856}
	I0806 00:58:00.760533    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b327da}
	I0806 00:58:00.760543    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | Found match: 4e:ad:42:3:c5:ed
	I0806 00:58:00.760555    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | IP: 192.169.0.15
	I0806 00:58:00.760564    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetConfigRaw
	I0806 00:58:00.761447    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetIP
	I0806 00:58:00.761615    5434 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/multinode-100000/config.json ...
	I0806 00:58:00.762092    5434 machine.go:94] provisionDockerMachine start ...
	I0806 00:58:00.762103    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .DriverName
	I0806 00:58:00.762222    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHHostname
	I0806 00:58:00.762317    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHPort
	I0806 00:58:00.762411    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:58:00.762496    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:58:00.762578    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHUsername
	I0806 00:58:00.762708    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:58:00.762879    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0806 00:58:00.762886    5434 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 00:58:00.766147    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:00 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0806 00:58:00.775784    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:00 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0806 00:58:00.776767    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:58:00.776783    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:58:00.776790    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:58:00.776797    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:00 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:58:01.161002    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:01 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0806 00:58:01.161025    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:01 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0806 00:58:01.275830    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 00:58:01.275850    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 00:58:01.275864    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 00:58:01.275870    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 00:58:01.276688    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:01 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0806 00:58:01.276698    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:01 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0806 00:58:06.885456    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:06 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0806 00:58:06.885612    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:06 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0806 00:58:06.885621    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:06 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0806 00:58:06.909022    5434 main.go:141] libmachine: (multinode-100000-m03) DBG | 2024/08/06 00:58:06 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0806 00:58:11.833120    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0806 00:58:11.833138    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetMachineName
	I0806 00:58:11.833293    5434 buildroot.go:166] provisioning hostname "multinode-100000-m03"
	I0806 00:58:11.833303    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetMachineName
	I0806 00:58:11.833405    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHHostname
	I0806 00:58:11.833498    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHPort
	I0806 00:58:11.833582    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:58:11.833689    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:58:11.833790    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHUsername
	I0806 00:58:11.833911    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:58:11.834050    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0806 00:58:11.834059    5434 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-100000-m03 && echo "multinode-100000-m03" | sudo tee /etc/hostname
	I0806 00:58:11.909385    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-100000-m03
	
	I0806 00:58:11.909402    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHHostname
	I0806 00:58:11.909532    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHPort
	I0806 00:58:11.909633    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:58:11.909726    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:58:11.909812    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHUsername
	I0806 00:58:11.909927    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:58:11.910056    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0806 00:58:11.910068    5434 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-100000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-100000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-100000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 00:58:11.978753    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 00:58:11.978769    5434 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19370-944/.minikube CaCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19370-944/.minikube}
	I0806 00:58:11.978782    5434 buildroot.go:174] setting up certificates
	I0806 00:58:11.978788    5434 provision.go:84] configureAuth start
	I0806 00:58:11.978795    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetMachineName
	I0806 00:58:11.978927    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetIP
	I0806 00:58:11.979051    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHHostname
	I0806 00:58:11.979145    5434 provision.go:143] copyHostCerts
	I0806 00:58:11.979173    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:58:11.979233    5434 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem, removing ...
	I0806 00:58:11.979238    5434 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 00:58:11.979398    5434 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem (1078 bytes)
	I0806 00:58:11.979584    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:58:11.979625    5434 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem, removing ...
	I0806 00:58:11.979630    5434 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 00:58:11.979731    5434 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem (1123 bytes)
	I0806 00:58:11.979873    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:58:11.979922    5434 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem, removing ...
	I0806 00:58:11.979926    5434 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 00:58:11.980034    5434 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem (1679 bytes)
	I0806 00:58:11.980181    5434 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem org=jenkins.multinode-100000-m03 san=[127.0.0.1 192.169.0.15 localhost minikube multinode-100000-m03]
	I0806 00:58:12.212453    5434 provision.go:177] copyRemoteCerts
	I0806 00:58:12.212501    5434 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 00:58:12.212516    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHHostname
	I0806 00:58:12.212656    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHPort
	I0806 00:58:12.212773    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:58:12.212873    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHUsername
	I0806 00:58:12.212983    5434 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/id_rsa Username:docker}
	I0806 00:58:12.250946    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0806 00:58:12.251023    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 00:58:12.270862    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0806 00:58:12.270931    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0806 00:58:12.290936    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0806 00:58:12.291014    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 00:58:12.310605    5434 provision.go:87] duration metric: took 331.803225ms to configureAuth
	I0806 00:58:12.310617    5434 buildroot.go:189] setting minikube options for container-runtime
	I0806 00:58:12.310775    5434 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:58:12.310788    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .DriverName
	I0806 00:58:12.310925    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHHostname
	I0806 00:58:12.311026    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHPort
	I0806 00:58:12.311114    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:58:12.311207    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:58:12.311295    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHUsername
	I0806 00:58:12.311399    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:58:12.311527    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0806 00:58:12.311534    5434 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0806 00:58:12.373876    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0806 00:58:12.373889    5434 buildroot.go:70] root file system type: tmpfs
	I0806 00:58:12.373965    5434 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0806 00:58:12.373978    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHHostname
	I0806 00:58:12.374107    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHPort
	I0806 00:58:12.374195    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:58:12.374282    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:58:12.374384    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHUsername
	I0806 00:58:12.374498    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:58:12.374639    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0806 00:58:12.374689    5434 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.13"
	Environment="NO_PROXY=192.169.0.13,192.169.0.14"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0806 00:58:12.450794    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.13
	Environment=NO_PROXY=192.169.0.13,192.169.0.14
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0806 00:58:12.450811    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHHostname
	I0806 00:58:12.450945    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHPort
	I0806 00:58:12.451041    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:58:12.451129    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:58:12.451221    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHUsername
	I0806 00:58:12.451348    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:58:12.451495    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0806 00:58:12.451508    5434 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0806 00:58:14.021658    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0806 00:58:14.021673    5434 machine.go:97] duration metric: took 13.259312225s to provisionDockerMachine
	I0806 00:58:14.021681    5434 start.go:293] postStartSetup for "multinode-100000-m03" (driver="hyperkit")
	I0806 00:58:14.021689    5434 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 00:58:14.021699    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .DriverName
	I0806 00:58:14.021902    5434 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 00:58:14.021916    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHHostname
	I0806 00:58:14.022001    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHPort
	I0806 00:58:14.022086    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:58:14.022165    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHUsername
	I0806 00:58:14.022256    5434 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/id_rsa Username:docker}
	I0806 00:58:14.066891    5434 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 00:58:14.071155    5434 command_runner.go:130] > NAME=Buildroot
	I0806 00:58:14.071166    5434 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0806 00:58:14.071170    5434 command_runner.go:130] > ID=buildroot
	I0806 00:58:14.071175    5434 command_runner.go:130] > VERSION_ID=2023.02.9
	I0806 00:58:14.071185    5434 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0806 00:58:14.071374    5434 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 00:58:14.071384    5434 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/addons for local assets ...
	I0806 00:58:14.071488    5434 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/files for local assets ...
	I0806 00:58:14.071680    5434 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> 14372.pem in /etc/ssl/certs
	I0806 00:58:14.071686    5434 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> /etc/ssl/certs/14372.pem
	I0806 00:58:14.071894    5434 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 00:58:14.082409    5434 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem --> /etc/ssl/certs/14372.pem (1708 bytes)
	I0806 00:58:14.110037    5434 start.go:296] duration metric: took 88.345962ms for postStartSetup
	I0806 00:58:14.110059    5434 fix.go:56] duration metric: took 13.455205562s for fixHost
	I0806 00:58:14.110075    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHHostname
	I0806 00:58:14.110208    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHPort
	I0806 00:58:14.110294    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:58:14.110376    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:58:14.110467    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHUsername
	I0806 00:58:14.110593    5434 main.go:141] libmachine: Using SSH client type: native
	I0806 00:58:14.110732    5434 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10e790c0] 0x10e7be20 <nil>  [] 0s} 192.169.0.15 22 <nil> <nil>}
	I0806 00:58:14.110740    5434 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 00:58:14.176032    5434 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722931094.071863234
	
	I0806 00:58:14.176045    5434 fix.go:216] guest clock: 1722931094.071863234
	I0806 00:58:14.176051    5434 fix.go:229] Guest: 2024-08-06 00:58:14.071863234 -0700 PDT Remote: 2024-08-06 00:58:14.110065 -0700 PDT m=+201.367961651 (delta=-38.201766ms)
	I0806 00:58:14.176061    5434 fix.go:200] guest clock delta is within tolerance: -38.201766ms
	I0806 00:58:14.176064    5434 start.go:83] releasing machines lock for "multinode-100000-m03", held for 13.521231837s
	I0806 00:58:14.176080    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .DriverName
	I0806 00:58:14.176208    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetIP
	I0806 00:58:14.199487    5434 out.go:177] * Found network options:
	I0806 00:58:14.220730    5434 out.go:177]   - NO_PROXY=192.169.0.13,192.169.0.14
	W0806 00:58:14.242504    5434 proxy.go:119] fail to check proxy env: Error ip not in block
	W0806 00:58:14.242537    5434 proxy.go:119] fail to check proxy env: Error ip not in block
	I0806 00:58:14.242557    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .DriverName
	I0806 00:58:14.243399    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .DriverName
	I0806 00:58:14.243765    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .DriverName
	I0806 00:58:14.243895    5434 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 00:58:14.243942    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHHostname
	W0806 00:58:14.244079    5434 proxy.go:119] fail to check proxy env: Error ip not in block
	W0806 00:58:14.244143    5434 proxy.go:119] fail to check proxy env: Error ip not in block
	I0806 00:58:14.244153    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHPort
	I0806 00:58:14.244266    5434 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0806 00:58:14.244310    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHHostname
	I0806 00:58:14.244321    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:58:14.244508    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHPort
	I0806 00:58:14.244531    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHUsername
	I0806 00:58:14.244626    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHKeyPath
	I0806 00:58:14.244705    5434 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/id_rsa Username:docker}
	I0806 00:58:14.244803    5434 main.go:141] libmachine: (multinode-100000-m03) Calling .GetSSHUsername
	I0806 00:58:14.244937    5434 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/multinode-100000-m03/id_rsa Username:docker}
	I0806 00:58:14.279699    5434 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0806 00:58:14.279721    5434 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 00:58:14.279776    5434 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 00:58:14.330683    5434 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0806 00:58:14.330728    5434 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0806 00:58:14.330754    5434 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 00:58:14.330765    5434 start.go:495] detecting cgroup driver to use...
	I0806 00:58:14.330862    5434 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:58:14.346086    5434 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0806 00:58:14.346420    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0806 00:58:14.355635    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0806 00:58:14.364513    5434 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0806 00:58:14.364561    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0806 00:58:14.373312    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:58:14.382133    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0806 00:58:14.390853    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 00:58:14.399701    5434 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 00:58:14.408827    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0806 00:58:14.417835    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0806 00:58:14.426935    5434 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0806 00:58:14.435957    5434 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 00:58:14.443786    5434 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0806 00:58:14.443882    5434 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 00:58:14.452060    5434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:58:14.558715    5434 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0806 00:58:14.578407    5434 start.go:495] detecting cgroup driver to use...
	I0806 00:58:14.578477    5434 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0806 00:58:14.597572    5434 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0806 00:58:14.598026    5434 command_runner.go:130] > [Unit]
	I0806 00:58:14.598038    5434 command_runner.go:130] > Description=Docker Application Container Engine
	I0806 00:58:14.598047    5434 command_runner.go:130] > Documentation=https://docs.docker.com
	I0806 00:58:14.598052    5434 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0806 00:58:14.598057    5434 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0806 00:58:14.598061    5434 command_runner.go:130] > StartLimitBurst=3
	I0806 00:58:14.598064    5434 command_runner.go:130] > StartLimitIntervalSec=60
	I0806 00:58:14.598067    5434 command_runner.go:130] > [Service]
	I0806 00:58:14.598070    5434 command_runner.go:130] > Type=notify
	I0806 00:58:14.598074    5434 command_runner.go:130] > Restart=on-failure
	I0806 00:58:14.598078    5434 command_runner.go:130] > Environment=NO_PROXY=192.169.0.13
	I0806 00:58:14.598083    5434 command_runner.go:130] > Environment=NO_PROXY=192.169.0.13,192.169.0.14
	I0806 00:58:14.598088    5434 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0806 00:58:14.598097    5434 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0806 00:58:14.598103    5434 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0806 00:58:14.598108    5434 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0806 00:58:14.598114    5434 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0806 00:58:14.598119    5434 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0806 00:58:14.598128    5434 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0806 00:58:14.598134    5434 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0806 00:58:14.598139    5434 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0806 00:58:14.598142    5434 command_runner.go:130] > ExecStart=
	I0806 00:58:14.598153    5434 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0806 00:58:14.598159    5434 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0806 00:58:14.598171    5434 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0806 00:58:14.598177    5434 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0806 00:58:14.598183    5434 command_runner.go:130] > LimitNOFILE=infinity
	I0806 00:58:14.598187    5434 command_runner.go:130] > LimitNPROC=infinity
	I0806 00:58:14.598190    5434 command_runner.go:130] > LimitCORE=infinity
	I0806 00:58:14.598195    5434 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0806 00:58:14.598199    5434 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0806 00:58:14.598203    5434 command_runner.go:130] > TasksMax=infinity
	I0806 00:58:14.598206    5434 command_runner.go:130] > TimeoutStartSec=0
	I0806 00:58:14.598212    5434 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0806 00:58:14.598215    5434 command_runner.go:130] > Delegate=yes
	I0806 00:58:14.598224    5434 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0806 00:58:14.598227    5434 command_runner.go:130] > KillMode=process
	I0806 00:58:14.598230    5434 command_runner.go:130] > [Install]
	I0806 00:58:14.598234    5434 command_runner.go:130] > WantedBy=multi-user.target
	I0806 00:58:14.598413    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:58:14.613420    5434 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 00:58:14.629701    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 00:58:14.640859    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:58:14.651379    5434 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0806 00:58:14.673305    5434 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 00:58:14.683771    5434 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 00:58:14.698544    5434 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0806 00:58:14.698791    5434 ssh_runner.go:195] Run: which cri-dockerd
	I0806 00:58:14.701580    5434 command_runner.go:130] > /usr/bin/cri-dockerd
	I0806 00:58:14.701750    5434 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0806 00:58:14.708820    5434 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0806 00:58:14.722421    5434 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0806 00:58:14.815094    5434 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0806 00:58:14.921962    5434 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0806 00:58:14.921985    5434 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0806 00:58:14.935838    5434 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 00:58:15.032162    5434 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 00:59:15.915566    5434 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0806 00:59:15.915581    5434 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0806 00:59:15.915774    5434 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.88239916s)
	I0806 00:59:15.915839    5434 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0806 00:59:15.924740    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 systemd[1]: Starting Docker Application Container Engine...
	I0806 00:59:15.924752    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:12.620205375Z" level=info msg="Starting up"
	I0806 00:59:15.924760    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:12.620885359Z" level=info msg="containerd not running, starting managed containerd"
	I0806 00:59:15.924774    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:12.621523310Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=494
	I0806 00:59:15.924784    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.640436395Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0806 00:59:15.924794    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.655975062Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0806 00:59:15.924809    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656077313Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0806 00:59:15.924819    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656226951Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0806 00:59:15.924828    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656271270Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0806 00:59:15.924839    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656455891Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:59:15.924848    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656499131Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0806 00:59:15.924867    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656643262Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:59:15.924875    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656684025Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0806 00:59:15.924886    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656715615Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:59:15.924896    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656749714Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0806 00:59:15.924907    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656891585Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0806 00:59:15.924916    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.657087147Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0806 00:59:15.924938    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.658771254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:59:15.924963    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.658832185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0806 00:59:15.925011    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.658977673Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0806 00:59:15.925024    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.659023792Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0806 00:59:15.925034    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.659168691Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0806 00:59:15.925042    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.659277517Z" level=info msg="metadata content store policy set" policy=shared
	I0806 00:59:15.925051    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.660551911Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0806 00:59:15.925060    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.660601241Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0806 00:59:15.925068    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.660615925Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0806 00:59:15.925078    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.660625942Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0806 00:59:15.925086    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.660642532Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0806 00:59:15.925095    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.660696000Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0806 00:59:15.925104    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.660982518Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0806 00:59:15.925115    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661131769Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0806 00:59:15.925124    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661166301Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0806 00:59:15.925135    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661177824Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0806 00:59:15.925145    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661187825Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0806 00:59:15.925154    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661196606Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0806 00:59:15.925163    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661205267Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0806 00:59:15.925172    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661214886Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0806 00:59:15.925181    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661224353Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0806 00:59:15.925190    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661232684Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0806 00:59:15.925473    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661240709Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0806 00:59:15.925484    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661248870Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0806 00:59:15.925495    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661261839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0806 00:59:15.925507    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661281648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0806 00:59:15.925515    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661292789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0806 00:59:15.925524    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661307256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0806 00:59:15.925533    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661319953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0806 00:59:15.925541    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661328979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0806 00:59:15.925549    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661337898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0806 00:59:15.925558    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661346271Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0806 00:59:15.925567    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661354564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0806 00:59:15.925575    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661363681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0806 00:59:15.925583    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661371351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0806 00:59:15.925592    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661378844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0806 00:59:15.925601    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661386749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0806 00:59:15.925612    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661396961Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0806 00:59:15.925621    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661410260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0806 00:59:15.925630    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661418222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0806 00:59:15.925639    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661426102Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0806 00:59:15.925648    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661470594Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0806 00:59:15.925660    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661510559Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0806 00:59:15.925671    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661520945Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0806 00:59:15.925747    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661528992Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0806 00:59:15.925759    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661535663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0806 00:59:15.925770    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661714555Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0806 00:59:15.925778    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661749667Z" level=info msg="NRI interface is disabled by configuration."
	I0806 00:59:15.925785    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661938092Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0806 00:59:15.925793    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661996010Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0806 00:59:15.925802    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.662029246Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0806 00:59:15.925809    5434 command_runner.go:130] > Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.662061316Z" level=info msg="containerd successfully booted in 0.022501s"
	I0806 00:59:15.925818    5434 command_runner.go:130] > Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.642985611Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0806 00:59:15.925825    5434 command_runner.go:130] > Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.656390226Z" level=info msg="Loading containers: start."
	I0806 00:59:15.925843    5434 command_runner.go:130] > Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.773927440Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0806 00:59:15.925854    5434 command_runner.go:130] > Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.836164993Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0806 00:59:15.925866    5434 command_runner.go:130] > Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.881102509Z" level=warning msg="error locating sandbox id 5eb4c04c1386508679e66336134c524325a604c101a04a94d158bc8e06676af1: sandbox 5eb4c04c1386508679e66336134c524325a604c101a04a94d158bc8e06676af1 not found"
	I0806 00:59:15.925876    5434 command_runner.go:130] > Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.881237996Z" level=info msg="Loading containers: done."
	I0806 00:59:15.925885    5434 command_runner.go:130] > Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.888707394Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	I0806 00:59:15.925893    5434 command_runner.go:130] > Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.888862219Z" level=info msg="Daemon has completed initialization"
	I0806 00:59:15.925908    5434 command_runner.go:130] > Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.911008448Z" level=info msg="API listen on /var/run/docker.sock"
	I0806 00:59:15.925915    5434 command_runner.go:130] > Aug 06 07:58:13 multinode-100000-m03 systemd[1]: Started Docker Application Container Engine.
	I0806 00:59:15.925923    5434 command_runner.go:130] > Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.913462716Z" level=info msg="API listen on [::]:2376"
	I0806 00:59:15.925930    5434 command_runner.go:130] > Aug 06 07:58:14 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:14.960059248Z" level=info msg="Processing signal 'terminated'"
	I0806 00:59:15.925940    5434 command_runner.go:130] > Aug 06 07:58:14 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:14.961027416Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0806 00:59:15.925948    5434 command_runner.go:130] > Aug 06 07:58:14 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:14.961153398Z" level=info msg="Daemon shutdown complete"
	I0806 00:59:15.925983    5434 command_runner.go:130] > Aug 06 07:58:14 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:14.961241454Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0806 00:59:15.925991    5434 command_runner.go:130] > Aug 06 07:58:14 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:14.961276079Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0806 00:59:15.925997    5434 command_runner.go:130] > Aug 06 07:58:14 multinode-100000-m03 systemd[1]: Stopping Docker Application Container Engine...
	I0806 00:59:15.926003    5434 command_runner.go:130] > Aug 06 07:58:15 multinode-100000-m03 systemd[1]: docker.service: Deactivated successfully.
	I0806 00:59:15.926009    5434 command_runner.go:130] > Aug 06 07:58:15 multinode-100000-m03 systemd[1]: Stopped Docker Application Container Engine.
	I0806 00:59:15.926015    5434 command_runner.go:130] > Aug 06 07:58:15 multinode-100000-m03 systemd[1]: Starting Docker Application Container Engine...
	I0806 00:59:15.926023    5434 command_runner.go:130] > Aug 06 07:58:16 multinode-100000-m03 dockerd[910]: time="2024-08-06T07:58:16.000826603Z" level=info msg="Starting up"
	I0806 00:59:15.926035    5434 command_runner.go:130] > Aug 06 07:59:16 multinode-100000-m03 dockerd[910]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0806 00:59:15.926044    5434 command_runner.go:130] > Aug 06 07:59:16 multinode-100000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0806 00:59:15.926051    5434 command_runner.go:130] > Aug 06 07:59:16 multinode-100000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0806 00:59:15.926056    5434 command_runner.go:130] > Aug 06 07:59:16 multinode-100000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	I0806 00:59:15.950293    5434 out.go:177] 
	W0806 00:59:15.971310    5434 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 06 07:58:12 multinode-100000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:58:12 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:12.620205375Z" level=info msg="Starting up"
	Aug 06 07:58:12 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:12.620885359Z" level=info msg="containerd not running, starting managed containerd"
	Aug 06 07:58:12 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:12.621523310Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=494
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.640436395Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.655975062Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656077313Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656226951Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656271270Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656455891Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656499131Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656643262Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656684025Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656715615Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656749714Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.656891585Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.657087147Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.658771254Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.658832185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.658977673Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.659023792Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.659168691Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.659277517Z" level=info msg="metadata content store policy set" policy=shared
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.660551911Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.660601241Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.660615925Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.660625942Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.660642532Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.660696000Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.660982518Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661131769Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661166301Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661177824Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661187825Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661196606Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661205267Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661214886Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661224353Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661232684Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661240709Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661248870Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661261839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661281648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661292789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661307256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661319953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661328979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661337898Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661346271Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661354564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661363681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661371351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661378844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661386749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661396961Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661410260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661418222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661426102Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661470594Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661510559Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661520945Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661528992Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661535663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661714555Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661749667Z" level=info msg="NRI interface is disabled by configuration."
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661938092Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.661996010Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.662029246Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 06 07:58:12 multinode-100000-m03 dockerd[494]: time="2024-08-06T07:58:12.662061316Z" level=info msg="containerd successfully booted in 0.022501s"
	Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.642985611Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.656390226Z" level=info msg="Loading containers: start."
	Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.773927440Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.836164993Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.881102509Z" level=warning msg="error locating sandbox id 5eb4c04c1386508679e66336134c524325a604c101a04a94d158bc8e06676af1: sandbox 5eb4c04c1386508679e66336134c524325a604c101a04a94d158bc8e06676af1 not found"
	Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.881237996Z" level=info msg="Loading containers: done."
	Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.888707394Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.888862219Z" level=info msg="Daemon has completed initialization"
	Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.911008448Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 06 07:58:13 multinode-100000-m03 systemd[1]: Started Docker Application Container Engine.
	Aug 06 07:58:13 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:13.913462716Z" level=info msg="API listen on [::]:2376"
	Aug 06 07:58:14 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:14.960059248Z" level=info msg="Processing signal 'terminated'"
	Aug 06 07:58:14 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:14.961027416Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 06 07:58:14 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:14.961153398Z" level=info msg="Daemon shutdown complete"
	Aug 06 07:58:14 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:14.961241454Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 06 07:58:14 multinode-100000-m03 dockerd[487]: time="2024-08-06T07:58:14.961276079Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 06 07:58:14 multinode-100000-m03 systemd[1]: Stopping Docker Application Container Engine...
	Aug 06 07:58:15 multinode-100000-m03 systemd[1]: docker.service: Deactivated successfully.
	Aug 06 07:58:15 multinode-100000-m03 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 07:58:15 multinode-100000-m03 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 07:58:16 multinode-100000-m03 dockerd[910]: time="2024-08-06T07:58:16.000826603Z" level=info msg="Starting up"
	Aug 06 07:59:16 multinode-100000-m03 dockerd[910]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 06 07:59:16 multinode-100000-m03 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 06 07:59:16 multinode-100000-m03 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 06 07:59:16 multinode-100000-m03 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0806 00:59:15.971426    5434 out.go:239] * 
	W0806 00:59:15.972635    5434 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 00:59:16.034481    5434 out.go:177] 
	
	
	==> Docker <==
	Aug 06 07:55:32 multinode-100000 dockerd[917]: time="2024-08-06T07:55:32.735792409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:55:32 multinode-100000 dockerd[917]: time="2024-08-06T07:55:32.736134361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:55:32 multinode-100000 dockerd[917]: time="2024-08-06T07:55:32.739012656Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:55:32 multinode-100000 dockerd[917]: time="2024-08-06T07:55:32.739071124Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:55:32 multinode-100000 dockerd[917]: time="2024-08-06T07:55:32.739084397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:55:32 multinode-100000 dockerd[917]: time="2024-08-06T07:55:32.739368080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:55:32 multinode-100000 cri-dockerd[1166]: time="2024-08-06T07:55:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8a9c0f012229adcf0c807db00cf89d607df5e331d042c8f8ac9709ec08ba3bf1/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 06 07:55:32 multinode-100000 cri-dockerd[1166]: time="2024-08-06T07:55:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/71d35e2032e881074bce03b406c798ceca2b288177079b4613c83ed1efdd08f5/resolv.conf as [nameserver 192.169.0.1]"
	Aug 06 07:55:33 multinode-100000 dockerd[917]: time="2024-08-06T07:55:33.051788371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:55:33 multinode-100000 dockerd[917]: time="2024-08-06T07:55:33.051954815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:55:33 multinode-100000 dockerd[917]: time="2024-08-06T07:55:33.052023991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:55:33 multinode-100000 dockerd[917]: time="2024-08-06T07:55:33.052329403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:55:33 multinode-100000 dockerd[917]: time="2024-08-06T07:55:33.079415603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:55:33 multinode-100000 dockerd[917]: time="2024-08-06T07:55:33.079479715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:55:33 multinode-100000 dockerd[917]: time="2024-08-06T07:55:33.079492048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:55:33 multinode-100000 dockerd[917]: time="2024-08-06T07:55:33.079641229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:55:47 multinode-100000 dockerd[911]: time="2024-08-06T07:55:47.420053235Z" level=info msg="ignoring event" container=1657841fe266c266f07d6abb260785bef1a97d49711c6b920d89664be20e8bbc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 07:55:47 multinode-100000 dockerd[917]: time="2024-08-06T07:55:47.420810768Z" level=info msg="shim disconnected" id=1657841fe266c266f07d6abb260785bef1a97d49711c6b920d89664be20e8bbc namespace=moby
	Aug 06 07:55:47 multinode-100000 dockerd[917]: time="2024-08-06T07:55:47.420990026Z" level=warning msg="cleaning up after shim disconnected" id=1657841fe266c266f07d6abb260785bef1a97d49711c6b920d89664be20e8bbc namespace=moby
	Aug 06 07:55:47 multinode-100000 dockerd[917]: time="2024-08-06T07:55:47.421036956Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 06 07:55:47 multinode-100000 dockerd[917]: time="2024-08-06T07:55:47.432965607Z" level=warning msg="cleanup warnings time=\"2024-08-06T07:55:47Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 06 07:55:58 multinode-100000 dockerd[917]: time="2024-08-06T07:55:58.897265582Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 07:55:58 multinode-100000 dockerd[917]: time="2024-08-06T07:55:58.897377066Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 07:55:58 multinode-100000 dockerd[917]: time="2024-08-06T07:55:58.897390204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 07:55:58 multinode-100000 dockerd[917]: time="2024-08-06T07:55:58.897492870Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	19176cd097d4e       6e38f40d628db                                                                                         3 minutes ago       Running             storage-provisioner       2                   9c3335d557174       storage-provisioner
	305703cae4339       cbb01a7bd410d                                                                                         3 minutes ago       Running             coredns                   1                   71d35e2032e88       coredns-7db6d8ff4d-snf8h
	552c12b66610a       8c811b4aec35f                                                                                         3 minutes ago       Running             busybox                   1                   8a9c0f012229a       busybox-fc5497c4f-dzbn7
	bfe8f25d6470f       917d7814b9b5b                                                                                         4 minutes ago       Running             kindnet-cni               1                   5bbdb304998b8       kindnet-g2xk7
	1657841fe266c       6e38f40d628db                                                                                         4 minutes ago       Exited              storage-provisioner       1                   9c3335d557174       storage-provisioner
	d11dfdd2f1031       55bb025d2cfa5                                                                                         4 minutes ago       Running             kube-proxy                1                   f33c7360f7227       kube-proxy-crsrr
	489c68ae62215       3edc18e7b7672                                                                                         4 minutes ago       Running             kube-scheduler            1                   c31166380bdf2       kube-scheduler-multinode-100000
	11253d2b59201       76932a3b37d7e                                                                                         4 minutes ago       Running             kube-controller-manager   1                   eed711389508a       kube-controller-manager-multinode-100000
	e3c0f426cdda3       1f6d574d502f3                                                                                         4 minutes ago       Running             kube-apiserver            1                   a57dd19f81c1d       kube-apiserver-multinode-100000
	85f6fd8ad8ac7       3861cfcd7c04c                                                                                         4 minutes ago       Running             etcd                      1                   6e484479f3a9f       etcd-multinode-100000
	f4860a1bb0cb9       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   19 minutes ago      Exited              busybox                   0                   730773bd53054       busybox-fc5497c4f-dzbn7
	4a58bc5cb9c3e       cbb01a7bd410d                                                                                         20 minutes ago      Exited              coredns                   0                   ea5bc31c54836       coredns-7db6d8ff4d-snf8h
	ca21c7b20c75e       kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3              20 minutes ago      Exited              kindnet-cni               0                   731b397a827bd       kindnet-g2xk7
	10a2028447459       55bb025d2cfa5                                                                                         21 minutes ago      Exited              kube-proxy                0                   6bbb2ed0b308f       kube-proxy-crsrr
	09c41cba0052b       3edc18e7b7672                                                                                         21 minutes ago      Exited              kube-scheduler            0                   d20d569460ead       kube-scheduler-multinode-100000
	b60a8dd0efa51       3861cfcd7c04c                                                                                         21 minutes ago      Exited              etcd                      0                   94cf07fa5ddcf       etcd-multinode-100000
	6d93185f30a91       1f6d574d502f3                                                                                         21 minutes ago      Exited              kube-apiserver            0                   bde71375b0e4c       kube-apiserver-multinode-100000
	e6892e6b325e1       76932a3b37d7e                                                                                         21 minutes ago      Exited              kube-controller-manager   0                   8cca7996d392f       kube-controller-manager-multinode-100000
	
	
	==> coredns [305703cae433] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:51155 - 29717 "HINFO IN 830135650408910311.2212646202890528478. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.011697502s
	
	
	==> coredns [4a58bc5cb9c3] <==
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:54441 - 10694 "HINFO IN 5152607944082316412.2643734041882751245. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.012399296s
	[INFO] 10.244.0.3:56703 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015252s
	[INFO] 10.244.0.3:42200 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.046026881s
	[INFO] 10.244.0.3:42318 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.01031955s
	[INFO] 10.244.0.3:37586 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.010459799s
	[INFO] 10.244.0.3:58156 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135202s
	[INFO] 10.244.0.3:44245 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010537472s
	[INFO] 10.244.0.3:44922 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000150629s
	[INFO] 10.244.0.3:39974 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00013721s
	[INFO] 10.244.0.3:33617 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010347469s
	[INFO] 10.244.0.3:38936 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000154675s
	[INFO] 10.244.0.3:44726 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080983s
	[INFO] 10.244.0.3:41349 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000247413s
	[INFO] 10.244.0.3:54177 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116507s
	[INFO] 10.244.0.3:35929 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000055089s
	[INFO] 10.244.0.3:46361 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000084906s
	[INFO] 10.244.0.3:49686 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000085442s
	[INFO] 10.244.0.3:47333 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0000847s
	[INFO] 10.244.0.3:41915 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000057433s
	[INFO] 10.244.0.3:34860 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000071303s
	[INFO] 10.244.0.3:46952 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000111703s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-100000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-100000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=multinode-100000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_06T00_38_01_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:37:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-100000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 07:59:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 07:55:30 +0000   Tue, 06 Aug 2024 07:37:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 07:55:30 +0000   Tue, 06 Aug 2024 07:37:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 07:55:30 +0000   Tue, 06 Aug 2024 07:37:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 07:55:30 +0000   Tue, 06 Aug 2024 07:55:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.13
	  Hostname:    multinode-100000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 cb135da7c75f4b268ff1a3401599ef49
	  System UUID:                9d6d49b5-0000-0000-bb0f-6ea8b6ad2848
	  Boot ID:                    5b893a2a-3375-4621-a0b6-34cde6424e52
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-dzbn7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-7db6d8ff4d-snf8h                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-multinode-100000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kindnet-g2xk7                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	  kube-system                 kube-apiserver-multinode-100000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-multinode-100000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-crsrr                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-multinode-100000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 21m                  kube-proxy       
	  Normal  Starting                 4m                   kube-proxy       
	  Normal  Starting                 21m                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)    kubelet          Node multinode-100000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)    kubelet          Node multinode-100000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)    kubelet          Node multinode-100000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    21m                  kubelet          Node multinode-100000 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  21m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                  kubelet          Node multinode-100000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     21m                  kubelet          Node multinode-100000 status is now: NodeHasSufficientPID
	  Normal  Starting                 21m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           21m                  node-controller  Node multinode-100000 event: Registered Node multinode-100000 in Controller
	  Normal  NodeReady                20m                  kubelet          Node multinode-100000 status is now: NodeReady
	  Normal  Starting                 4m6s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m6s (x8 over 4m6s)  kubelet          Node multinode-100000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m6s (x8 over 4m6s)  kubelet          Node multinode-100000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m6s (x7 over 4m6s)  kubelet          Node multinode-100000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m49s                node-controller  Node multinode-100000 event: Registered Node multinode-100000 in Controller
	
	
	Name:               multinode-100000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-100000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=multinode-100000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_06T00_57_38_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:57:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-100000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 07:59:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 06 Aug 2024 07:58:09 +0000   Tue, 06 Aug 2024 07:57:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 06 Aug 2024 07:58:09 +0000   Tue, 06 Aug 2024 07:57:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 06 Aug 2024 07:58:09 +0000   Tue, 06 Aug 2024 07:57:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 06 Aug 2024 07:58:09 +0000   Tue, 06 Aug 2024 07:57:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.14
	  Hostname:    multinode-100000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 b16687bddfa146f5b1ffa02c11b7c5bd
	  System UUID:                11e34a8b-0000-0000-9cb1-968ee3a613d4
	  Boot ID:                    76b049f4-b885-4bb9-b12d-5823520506e5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-pw2kr       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      100s
	  kube-system                 kube-proxy-xgwwm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 93s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  100s (x2 over 100s)  kubelet          Node multinode-100000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    100s (x2 over 100s)  kubelet          Node multinode-100000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     100s (x2 over 100s)  kubelet          Node multinode-100000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  100s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           99s                  node-controller  Node multinode-100000-m02 event: Registered Node multinode-100000-m02 in Controller
	  Normal  NodeReady                80s                  kubelet          Node multinode-100000-m02 status is now: NodeReady
	
	
	Name:               multinode-100000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-100000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8
	                    minikube.k8s.io/name=multinode-100000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_06T00_53_13_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 06 Aug 2024 07:53:13 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-100000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 06 Aug 2024 07:54:24 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Tue, 06 Aug 2024 07:53:27 +0000   Tue, 06 Aug 2024 07:56:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Tue, 06 Aug 2024 07:53:27 +0000   Tue, 06 Aug 2024 07:56:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Tue, 06 Aug 2024 07:53:27 +0000   Tue, 06 Aug 2024 07:56:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Tue, 06 Aug 2024 07:53:27 +0000   Tue, 06 Aug 2024 07:56:09 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.15
	  Hostname:    multinode-100000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 405631c47c9b4602b8ca253c774af06d
	  System UUID:                83a944ea-0000-0000-930f-df1a6331c821
	  Boot ID:                    bd2884b6-d728-45cc-b651-febbafe6f6e6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-bfsf8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kindnet-dn72w              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m11s
	  kube-system                 kube-proxy-d9c42           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m4s                   kube-proxy       
	  Normal  Starting                 6m2s                   kube-proxy       
	  Normal  NodeHasNoDiskPressure    7m11s (x2 over 7m11s)  kubelet          Node multinode-100000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m11s (x2 over 7m11s)  kubelet          Node multinode-100000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m11s (x2 over 7m11s)  kubelet          Node multinode-100000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                6m48s                  kubelet          Node multinode-100000-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  6m6s (x2 over 6m6s)    kubelet          Node multinode-100000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m6s (x2 over 6m6s)    kubelet          Node multinode-100000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m6s (x2 over 6m6s)    kubelet          Node multinode-100000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m6s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m51s                  kubelet          Node multinode-100000-m03 status is now: NodeReady
	  Normal  RegisteredNode           3m49s                  node-controller  Node multinode-100000-m03 event: Registered Node multinode-100000-m03 in Controller
	  Normal  NodeNotReady             3m9s                   node-controller  Node multinode-100000-m03 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.007757] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.665942] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000003] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007017] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Aug 6 07:55] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.234895] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.236748] systemd-fstab-generator[470]: Ignoring "noauto" option for root device
	[  +0.092871] systemd-fstab-generator[482]: Ignoring "noauto" option for root device
	[  +1.885630] systemd-fstab-generator[840]: Ignoring "noauto" option for root device
	[  +0.260848] systemd-fstab-generator[877]: Ignoring "noauto" option for root device
	[  +0.105661] systemd-fstab-generator[889]: Ignoring "noauto" option for root device
	[  +0.111773] systemd-fstab-generator[903]: Ignoring "noauto" option for root device
	[  +2.207478] kauditd_printk_skb: 167 callbacks suppressed
	[  +0.260691] systemd-fstab-generator[1119]: Ignoring "noauto" option for root device
	[  +0.100054] systemd-fstab-generator[1131]: Ignoring "noauto" option for root device
	[  +0.104697] systemd-fstab-generator[1143]: Ignoring "noauto" option for root device
	[  +0.138221] systemd-fstab-generator[1158]: Ignoring "noauto" option for root device
	[  +0.424003] systemd-fstab-generator[1288]: Ignoring "noauto" option for root device
	[  +1.833813] systemd-fstab-generator[1426]: Ignoring "noauto" option for root device
	[  +4.574571] kauditd_printk_skb: 208 callbacks suppressed
	[  +2.962662] systemd-fstab-generator[2252]: Ignoring "noauto" option for root device
	[  +9.879906] kauditd_printk_skb: 70 callbacks suppressed
	[ +17.354799] kauditd_printk_skb: 16 callbacks suppressed
	
	
	==> etcd [85f6fd8ad8ac] <==
	{"level":"info","ts":"2024-08-06T07:55:13.806956Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-06T07:55:13.807451Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-08-06T07:55:13.808414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 switched to configuration voters=(16152458731666035825)"}
	{"level":"info","ts":"2024-08-06T07:55:13.808481Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","added-peer-id":"e0290fa3161c5471","added-peer-peer-urls":["https://192.169.0.13:2380"]}
	{"level":"info","ts":"2024-08-06T07:55:13.809066Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:55:13.809125Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:55:13.824929Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-06T07:55:13.829267Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2024-08-06T07:55:13.829303Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2024-08-06T07:55:13.830379Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"e0290fa3161c5471","initial-advertise-peer-urls":["https://192.169.0.13:2380"],"listen-peer-urls":["https://192.169.0.13:2380"],"advertise-client-urls":["https://192.169.0.13:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.169.0.13:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-06T07:55:13.83044Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-06T07:55:15.484672Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-06T07:55:15.484868Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-06T07:55:15.484935Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgPreVoteResp from e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-08-06T07:55:15.484958Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became candidate at term 3"}
	{"level":"info","ts":"2024-08-06T07:55:15.485032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgVoteResp from e0290fa3161c5471 at term 3"}
	{"level":"info","ts":"2024-08-06T07:55:15.485099Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became leader at term 3"}
	{"level":"info","ts":"2024-08-06T07:55:15.485261Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e0290fa3161c5471 elected leader e0290fa3161c5471 at term 3"}
	{"level":"info","ts":"2024-08-06T07:55:15.487941Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e0290fa3161c5471","local-member-attributes":"{Name:multinode-100000 ClientURLs:[https://192.169.0.13:2379]}","request-path":"/0/members/e0290fa3161c5471/attributes","cluster-id":"87b46e718846f146","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-06T07:55:15.488023Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T07:55:15.488342Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T07:55:15.489675Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.13:2379"}
	{"level":"info","ts":"2024-08-06T07:55:15.490954Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-06T07:55:15.491301Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-06T07:55:15.491349Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [b60a8dd0efa5] <==
	{"level":"info","ts":"2024-08-06T07:37:57.154583Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-06T07:37:57.156332Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-06T07:37:57.162987Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.13:2379"}
	{"level":"info","ts":"2024-08-06T07:37:57.167336Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-06T07:37:57.167373Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-06T07:37:57.16953Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:37:57.169589Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:37:57.169719Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-06T07:47:57.219223Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":686}
	{"level":"info","ts":"2024-08-06T07:47:57.221754Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":686,"took":"2.185771ms","hash":4164319908,"current-db-size-bytes":1994752,"current-db-size":"2.0 MB","current-db-size-in-use-bytes":1994752,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2024-08-06T07:47:57.221798Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4164319908,"revision":686,"compact-revision":-1}
	{"level":"info","ts":"2024-08-06T07:52:10.269202Z","caller":"traceutil/trace.go:171","msg":"trace[808197773] transaction","detail":"{read_only:false; response_revision:1165; number_of_response:1; }","duration":"104.082235ms","start":"2024-08-06T07:52:10.165072Z","end":"2024-08-06T07:52:10.269154Z","steps":["trace[808197773] 'process raft request'  (duration: 103.999362ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-06T07:52:57.222789Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":926}
	{"level":"info","ts":"2024-08-06T07:52:57.224031Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":926,"took":"926.569µs","hash":3882059122,"current-db-size-bytes":1994752,"current-db-size":"2.0 MB","current-db-size-in-use-bytes":1617920,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-08-06T07:52:57.224093Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3882059122,"revision":926,"compact-revision":686}
	{"level":"info","ts":"2024-08-06T07:54:44.806855Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-06T07:54:44.806891Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-100000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.13:2380"],"advertise-client-urls":["https://192.169.0.13:2379"]}
	{"level":"warn","ts":"2024-08-06T07:54:44.80696Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-06T07:54:44.807019Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-06T07:54:44.825475Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.13:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-06T07:54:44.825517Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.13:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-06T07:54:44.828001Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"e0290fa3161c5471","current-leader-member-id":"e0290fa3161c5471"}
	{"level":"info","ts":"2024-08-06T07:54:44.829438Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2024-08-06T07:54:44.82953Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2024-08-06T07:54:44.829538Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-100000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.13:2380"],"advertise-client-urls":["https://192.169.0.13:2379"]}
	
	
	==> kernel <==
	 07:59:18 up 4 min,  0 users,  load average: 0.09, 0.17, 0.08
	Linux multinode-100000 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [bfe8f25d6470] <==
	I0806 07:58:38.498641       1 main.go:322] Node multinode-100000-m03 has CIDR [10.244.2.0/24] 
	I0806 07:58:48.491272       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:58:48.491397       1 main.go:299] handling current node
	I0806 07:58:48.491435       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0806 07:58:48.491461       1 main.go:322] Node multinode-100000-m03 has CIDR [10.244.2.0/24] 
	I0806 07:58:48.491715       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0806 07:58:48.492033       1 main.go:322] Node multinode-100000-m02 has CIDR [10.244.1.0/24] 
	I0806 07:58:58.500059       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:58:58.500100       1 main.go:299] handling current node
	I0806 07:58:58.500110       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0806 07:58:58.500115       1 main.go:322] Node multinode-100000-m03 has CIDR [10.244.2.0/24] 
	I0806 07:58:58.500337       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0806 07:58:58.500367       1 main.go:322] Node multinode-100000-m02 has CIDR [10.244.1.0/24] 
	I0806 07:59:08.500273       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0806 07:59:08.500356       1 main.go:322] Node multinode-100000-m02 has CIDR [10.244.1.0/24] 
	I0806 07:59:08.500698       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:59:08.501237       1 main.go:299] handling current node
	I0806 07:59:08.501334       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0806 07:59:08.501366       1 main.go:322] Node multinode-100000-m03 has CIDR [10.244.2.0/24] 
	I0806 07:59:18.491227       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:59:18.491262       1 main.go:299] handling current node
	I0806 07:59:18.491279       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0806 07:59:18.491287       1 main.go:322] Node multinode-100000-m03 has CIDR [10.244.2.0/24] 
	I0806 07:59:18.491511       1 main.go:295] Handling node with IPs: map[192.169.0.14:{}]
	I0806 07:59:18.491603       1 main.go:322] Node multinode-100000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [ca21c7b20c75] <==
	I0806 07:53:39.613815       1 main.go:322] Node multinode-100000-m03 has CIDR [10.244.2.0/24] 
	I0806 07:53:49.608376       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:53:49.608547       1 main.go:299] handling current node
	I0806 07:53:49.608588       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0806 07:53:49.608686       1 main.go:322] Node multinode-100000-m03 has CIDR [10.244.2.0/24] 
	I0806 07:53:59.615606       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0806 07:53:59.615675       1 main.go:322] Node multinode-100000-m03 has CIDR [10.244.2.0/24] 
	I0806 07:53:59.615977       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:53:59.616007       1 main.go:299] handling current node
	I0806 07:54:09.616410       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:54:09.616683       1 main.go:299] handling current node
	I0806 07:54:09.616787       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0806 07:54:09.616908       1 main.go:322] Node multinode-100000-m03 has CIDR [10.244.2.0/24] 
	I0806 07:54:19.608266       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:54:19.608620       1 main.go:299] handling current node
	I0806 07:54:19.608938       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0806 07:54:19.609314       1 main.go:322] Node multinode-100000-m03 has CIDR [10.244.2.0/24] 
	I0806 07:54:29.610153       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:54:29.610271       1 main.go:299] handling current node
	I0806 07:54:29.610288       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0806 07:54:29.610298       1 main.go:322] Node multinode-100000-m03 has CIDR [10.244.2.0/24] 
	I0806 07:54:39.609412       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0806 07:54:39.609521       1 main.go:299] handling current node
	I0806 07:54:39.609533       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0806 07:54:39.609538       1 main.go:322] Node multinode-100000-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [6d93185f30a9] <==
	W0806 07:54:44.814947       1 logging.go:59] [core] [Channel #13 SubChannel #15] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 07:54:44.815012       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 07:54:44.815039       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 07:54:44.815062       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 07:54:44.815085       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 07:54:44.816998       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 07:54:44.817026       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 07:54:44.817049       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 07:54:44.817070       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 07:54:44.817092       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 07:54:44.817113       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 07:54:44.817133       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 07:54:44.817156       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 07:54:44.817212       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 07:54:44.817242       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 07:54:44.817266       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 07:54:44.817290       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 07:54:44.817313       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 07:54:44.817337       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 07:54:44.817361       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0806 07:54:44.818759       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0806 07:54:44.837969       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	E0806 07:54:44.875062       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled
	E0806 07:54:44.875330       1 wrap.go:54] timeout or abort while handling: method=GET URI="/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath" audit-ID="a54f3a55-5c17-4518-973b-699d052c187f"
	E0806 07:54:44.875352       1 timeout.go:142] post-timeout activity - time-elapsed: 2.451µs, GET "/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath" result: <nil>
	
	
	==> kube-apiserver [e3c0f426cdda] <==
	I0806 07:55:16.488572       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0806 07:55:16.488792       1 policy_source.go:224] refreshing policies
	I0806 07:55:16.491036       1 shared_informer.go:320] Caches are synced for configmaps
	I0806 07:55:16.491422       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0806 07:55:16.492157       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0806 07:55:16.495519       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0806 07:55:16.495596       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0806 07:55:16.504551       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0806 07:55:16.509385       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0806 07:55:16.510486       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0806 07:55:16.511737       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0806 07:55:16.515900       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0806 07:55:16.516118       1 aggregator.go:165] initial CRD sync complete...
	I0806 07:55:16.516265       1 autoregister_controller.go:141] Starting autoregister controller
	I0806 07:55:16.516384       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0806 07:55:16.516429       1 cache.go:39] Caches are synced for autoregister controller
	E0806 07:55:16.521452       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0806 07:55:17.394242       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0806 07:55:18.422951       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0806 07:55:18.518312       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0806 07:55:18.526052       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0806 07:55:18.558947       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0806 07:55:18.563053       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0806 07:55:30.012835       1 controller.go:615] quota admission added evaluator for: endpoints
	I0806 07:55:30.063966       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [11253d2b5920] <==
	I0806 07:55:29.598942       1 shared_informer.go:320] Caches are synced for expand
	I0806 07:55:29.600743       1 shared_informer.go:320] Caches are synced for ephemeral
	I0806 07:55:29.678793       1 shared_informer.go:320] Caches are synced for disruption
	I0806 07:55:29.694735       1 shared_informer.go:320] Caches are synced for stateful set
	I0806 07:55:29.710289       1 shared_informer.go:320] Caches are synced for HPA
	I0806 07:55:29.747049       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0806 07:55:29.767652       1 shared_informer.go:320] Caches are synced for resource quota
	I0806 07:55:29.774863       1 shared_informer.go:320] Caches are synced for endpoint
	I0806 07:55:29.780135       1 shared_informer.go:320] Caches are synced for resource quota
	I0806 07:55:29.811641       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0806 07:55:30.185491       1 shared_informer.go:320] Caches are synced for garbage collector
	I0806 07:55:30.207725       1 shared_informer.go:320] Caches are synced for garbage collector
	I0806 07:55:30.207833       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0806 07:55:30.579285       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-100000-m03"
	I0806 07:55:33.170257       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="29.271µs"
	I0806 07:55:33.186924       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.786363ms"
	I0806 07:55:33.187037       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.999µs"
	I0806 07:55:34.200331       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="6.473477ms"
	I0806 07:55:34.200620       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="218.021µs"
	I0806 07:56:09.708881       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.46482ms"
	I0806 07:56:09.709067       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.079µs"
	I0806 07:57:38.316097       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-100000-m02\" does not exist"
	I0806 07:57:38.325306       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-100000-m02" podCIDRs=["10.244.1.0/24"]
	I0806 07:57:39.723519       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-100000-m02"
	I0806 07:57:58.639797       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-100000-m02"
	
	
	==> kube-controller-manager [e6892e6b325e] <==
	I0806 07:39:55.173384       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.984127ms"
	I0806 07:39:55.173460       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.692µs"
	I0806 07:52:07.325935       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-100000-m03\" does not exist"
	I0806 07:52:07.342865       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-100000-m03" podCIDRs=["10.244.1.0/24"]
	I0806 07:52:09.851060       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-100000-m03"
	I0806 07:52:30.373055       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-100000-m03"
	I0806 07:52:30.382873       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.276µs"
	I0806 07:52:30.391038       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.602µs"
	I0806 07:52:32.408559       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.578386ms"
	I0806 07:52:32.408616       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.014µs"
	I0806 07:53:09.171154       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.139086ms"
	I0806 07:53:09.175196       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.978136ms"
	I0806 07:53:09.175804       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.257µs"
	I0806 07:53:13.398407       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-100000-m03\" does not exist"
	I0806 07:53:13.404870       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-100000-m03" podCIDRs=["10.244.2.0/24"]
	I0806 07:53:15.293136       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.929µs"
	I0806 07:53:28.554492       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-100000-m03"
	I0806 07:53:28.566261       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.516µs"
	I0806 07:53:38.331842       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.02µs"
	I0806 07:53:38.334824       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.081µs"
	I0806 07:53:38.341838       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.995µs"
	I0806 07:53:38.477263       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.291µs"
	I0806 07:53:38.479196       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="62.295µs"
	I0806 07:53:39.495459       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.323598ms"
	I0806 07:53:39.495743       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.517µs"
	
	
	==> kube-proxy [10a202844745] <==
	I0806 07:38:15.590518       1 server_linux.go:69] "Using iptables proxy"
	I0806 07:38:15.601869       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.13"]
	I0806 07:38:15.662400       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0806 07:38:15.662440       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0806 07:38:15.662490       1 server_linux.go:165] "Using iptables Proxier"
	I0806 07:38:15.664791       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0806 07:38:15.664918       1 server.go:872] "Version info" version="v1.30.3"
	I0806 07:38:15.664946       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 07:38:15.665753       1 config.go:192] "Starting service config controller"
	I0806 07:38:15.665783       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0806 07:38:15.665799       1 config.go:101] "Starting endpoint slice config controller"
	I0806 07:38:15.665822       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0806 07:38:15.667388       1 config.go:319] "Starting node config controller"
	I0806 07:38:15.667416       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0806 07:38:15.765917       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0806 07:38:15.765965       1 shared_informer.go:320] Caches are synced for service config
	I0806 07:38:15.767534       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [d11dfdd2f103] <==
	I0806 07:55:17.533244       1 server_linux.go:69] "Using iptables proxy"
	I0806 07:55:17.545046       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.13"]
	I0806 07:55:17.615752       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0806 07:55:17.615775       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0806 07:55:17.615808       1 server_linux.go:165] "Using iptables Proxier"
	I0806 07:55:17.618733       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0806 07:55:17.619387       1 server.go:872] "Version info" version="v1.30.3"
	I0806 07:55:17.619665       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 07:55:17.620814       1 config.go:192] "Starting service config controller"
	I0806 07:55:17.620991       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0806 07:55:17.621126       1 config.go:101] "Starting endpoint slice config controller"
	I0806 07:55:17.621227       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0806 07:55:17.622907       1 config.go:319] "Starting node config controller"
	I0806 07:55:17.623683       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0806 07:55:17.722180       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0806 07:55:17.722502       1 shared_informer.go:320] Caches are synced for service config
	I0806 07:55:17.724485       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [09c41cba0052] <==
	E0806 07:37:58.446242       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0806 07:37:58.446116       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0806 07:37:58.446419       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0806 07:37:58.445401       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0806 07:37:58.446582       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0806 07:37:58.446196       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0806 07:37:58.446734       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0806 07:37:59.253603       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0806 07:37:59.253776       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0806 07:37:59.282330       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0806 07:37:59.282504       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0806 07:37:59.305407       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0806 07:37:59.305621       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0806 07:37:59.351009       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0806 07:37:59.351049       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0806 07:37:59.487287       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0806 07:37:59.487395       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0806 07:37:59.506883       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0806 07:37:59.506925       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0806 07:37:59.509357       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0806 07:37:59.509392       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0806 07:38:01.840667       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0806 07:54:44.820440       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0806 07:54:44.820845       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0806 07:54:44.821010       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [489c68ae6221] <==
	I0806 07:55:15.116381       1 serving.go:380] Generated self-signed cert in-memory
	W0806 07:55:16.440954       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0806 07:55:16.441036       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0806 07:55:16.441061       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0806 07:55:16.441073       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0806 07:55:16.518713       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0806 07:55:16.518762       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0806 07:55:16.521154       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0806 07:55:16.521996       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0806 07:55:16.522034       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0806 07:55:16.522048       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0806 07:55:16.622714       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 06 07:55:30 multinode-100000 kubelet[1433]: I0806 07:55:30.571869    1433 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	Aug 06 07:55:48 multinode-100000 kubelet[1433]: I0806 07:55:48.288563    1433 scope.go:117] "RemoveContainer" containerID="47e0c0c6895efd6bd0264e9afb736e61f50a8c70e25de98ffd02403f7fb7310a"
	Aug 06 07:55:48 multinode-100000 kubelet[1433]: I0806 07:55:48.288817    1433 scope.go:117] "RemoveContainer" containerID="1657841fe266c266f07d6abb260785bef1a97d49711c6b920d89664be20e8bbc"
	Aug 06 07:55:48 multinode-100000 kubelet[1433]: E0806 07:55:48.288923    1433 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(38b20fa5-6002-4e12-860f-1aa0047581b1)\"" pod="kube-system/storage-provisioner" podUID="38b20fa5-6002-4e12-860f-1aa0047581b1"
	Aug 06 07:55:58 multinode-100000 kubelet[1433]: I0806 07:55:58.852837    1433 scope.go:117] "RemoveContainer" containerID="1657841fe266c266f07d6abb260785bef1a97d49711c6b920d89664be20e8bbc"
	Aug 06 07:56:12 multinode-100000 kubelet[1433]: E0806 07:56:12.873478    1433 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:56:12 multinode-100000 kubelet[1433]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:56:12 multinode-100000 kubelet[1433]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:56:12 multinode-100000 kubelet[1433]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:56:12 multinode-100000 kubelet[1433]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:57:12 multinode-100000 kubelet[1433]: E0806 07:57:12.874876    1433 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:57:12 multinode-100000 kubelet[1433]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:57:12 multinode-100000 kubelet[1433]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:57:12 multinode-100000 kubelet[1433]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:57:12 multinode-100000 kubelet[1433]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:58:12 multinode-100000 kubelet[1433]: E0806 07:58:12.874929    1433 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:58:12 multinode-100000 kubelet[1433]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:58:12 multinode-100000 kubelet[1433]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:58:12 multinode-100000 kubelet[1433]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:58:12 multinode-100000 kubelet[1433]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 06 07:59:12 multinode-100000 kubelet[1433]: E0806 07:59:12.880466    1433 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 06 07:59:12 multinode-100000 kubelet[1433]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 06 07:59:12 multinode-100000 kubelet[1433]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 06 07:59:12 multinode-100000 kubelet[1433]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 06 07:59:12 multinode-100000 kubelet[1433]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-100000 -n multinode-100000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-100000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (292.25s)

                                                
                                    
TestScheduledStopUnix (142.17s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-572000 --memory=2048 --driver=hyperkit 
E0806 01:07:41.488611    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-572000 --memory=2048 --driver=hyperkit : exit status 80 (2m16.844423651s)

                                                
                                                
-- stdout --
	* [scheduled-stop-572000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-572000" primary control-plane node in "scheduled-stop-572000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "scheduled-stop-572000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for f6:23:52:b0:ac:49
	* Failed to start hyperkit VM. Running "minikube delete -p scheduled-stop-572000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 36:be:b4:b0:ad:4b
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 36:be:b4:b0:ad:4b
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [scheduled-stop-572000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-572000" primary control-plane node in "scheduled-stop-572000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "scheduled-stop-572000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for f6:23:52:b0:ac:49
	* Failed to start hyperkit VM. Running "minikube delete -p scheduled-stop-572000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 36:be:b4:b0:ad:4b
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 36:be:b4:b0:ad:4b
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-08-06 01:08:01.830622 -0700 PDT m=+3851.005150706
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-572000 -n scheduled-stop-572000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-572000 -n scheduled-stop-572000: exit status 7 (77.379185ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0806 01:08:01.906315    6360 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0806 01:08:01.906342    6360 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-572000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "scheduled-stop-572000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-572000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-572000: (5.248292605s)
--- FAIL: TestScheduledStopUnix (142.17s)
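Diagnostic note: the "IP address never found in dhcp leases file" failures above mean the hyperkit driver polled the macOS DHCP leases file (normally /var/db/dhcpd_leases) and the VM's MAC address never appeared there. A minimal sketch of that lookup, run against an inline sample lease so it is self-contained (the MAC is the one from the failure above; the IP and lease contents are illustrative, not taken from this run):

```shell
# Sample entry in the /var/db/dhcpd_leases format that the hyperkit
# driver scans; hw_address is "1,<mac>" with leading zeros stripped.
leases='{
	name=scheduled-stop-572000
	ip_address=192.168.64.5
	hw_address=1,f6:23:52:b0:ac:49
	id=1,f6:23:52:b0:ac:49
}'
mac="f6:23:52:b0:ac:49"   # MAC as reported in the error message
# Remember the most recent ip_address; print it when hw_address matches.
ip=$(printf '%s\n' "$leases" | awk -F= -v m="1,$mac" '
	/ip_address/ { ip = $2 }
	/hw_address/ { if ($2 == m) { print ip; exit } }')
echo "$ip"
```

On the failing host the equivalent check would be run against the real file (e.g. `grep -A1 "1,$mac" /var/db/dhcpd_leases`); an empty result there is exactly the condition the driver times out on.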

TestPause/serial/Start (141.25s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-051000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p pause-051000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : exit status 80 (2m21.174380258s)

-- stdout --
	* [pause-051000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "pause-051000" primary control-plane node in "pause-051000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "pause-051000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for fa:4b:72:a:5b:dd
	* Failed to start hyperkit VM. Running "minikube delete -p pause-051000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for e2:fa:6b:b4:3e:3e
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for e2:fa:6b:b4:3e:3e
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-amd64 start -p pause-051000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-051000 -n pause-051000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-051000 -n pause-051000: exit status 7 (79.946481ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0806 01:48:47.575104    8650 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0806 01:48:47.575125    8650 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-051000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestPause/serial/Start (141.25s)

TestNoKubernetes/serial/StartWithStopK8s (194.81s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-883000 --no-kubernetes --driver=hyperkit 
E0806 01:49:43.557194    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/skaffold-699000/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-883000 --no-kubernetes --driver=hyperkit : exit status 90 (1m13.717044932s)

-- stdout --
	* [NoKubernetes-883000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-883000
	* Updating the running hyperkit "NoKubernetes-883000" VM ...
	  - Kubernetes: Stopping ...
	  - Kubernetes: Stopped
	
	

-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 06 08:49:01 NoKubernetes-883000 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[514]: time="2024-08-06T08:49:01.274677964Z" level=info msg="Starting up"
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[514]: time="2024-08-06T08:49:01.275369845Z" level=info msg="containerd not running, starting managed containerd"
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[514]: time="2024-08-06T08:49:01.275973884Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.291309732Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.306254559Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.306314372Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.306378930Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.306431840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.306505980Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.306543093Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.306684320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.306724423Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.306755711Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.306784212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.306871131Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.307046233Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.308636497Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.308691334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.308823616Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.308865756Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.308952363Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.309015995Z" level=info msg="metadata content store policy set" policy=shared
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.319015822Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.319105487Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.319153205Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.319199562Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.319237126Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.319326692Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.319537182Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.319654784Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.319694863Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.319732865Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.319765994Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.319800003Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.319832450Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.319862782Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.319903456Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.319942997Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.319976301Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320005313Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320042988Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320115816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320148106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320183466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320214312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320247022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320276187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320305067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320344348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320378986Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320408274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320436772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320465286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320495451Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320530293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320561447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320592669Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320676575Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320721067Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320751898Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320781035Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320810928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320840767Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320868697Z" level=info msg="NRI interface is disabled by configuration."
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.321068806Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.321154341Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.321250650Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.321293140Z" level=info msg="containerd successfully booted in 0.030427s"
	Aug 06 08:49:02 NoKubernetes-883000 dockerd[514]: time="2024-08-06T08:49:02.329898470Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 06 08:49:02 NoKubernetes-883000 dockerd[514]: time="2024-08-06T08:49:02.337967669Z" level=info msg="Loading containers: start."
	Aug 06 08:49:02 NoKubernetes-883000 dockerd[514]: time="2024-08-06T08:49:02.427832256Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 06 08:49:02 NoKubernetes-883000 dockerd[514]: time="2024-08-06T08:49:02.521130146Z" level=info msg="Loading containers: done."
	Aug 06 08:49:02 NoKubernetes-883000 dockerd[514]: time="2024-08-06T08:49:02.532627500Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 06 08:49:02 NoKubernetes-883000 dockerd[514]: time="2024-08-06T08:49:02.533701701Z" level=info msg="Daemon has completed initialization"
	Aug 06 08:49:02 NoKubernetes-883000 dockerd[514]: time="2024-08-06T08:49:02.559788258Z" level=info msg="API listen on [::]:2376"
	Aug 06 08:49:02 NoKubernetes-883000 systemd[1]: Started Docker Application Container Engine.
	Aug 06 08:49:02 NoKubernetes-883000 dockerd[514]: time="2024-08-06T08:49:02.560517192Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 06 08:49:03 NoKubernetes-883000 dockerd[514]: time="2024-08-06T08:49:03.586072117Z" level=info msg="Processing signal 'terminated'"
	Aug 06 08:49:03 NoKubernetes-883000 dockerd[514]: time="2024-08-06T08:49:03.586882376Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 06 08:49:03 NoKubernetes-883000 dockerd[514]: time="2024-08-06T08:49:03.587052611Z" level=info msg="Daemon shutdown complete"
	Aug 06 08:49:03 NoKubernetes-883000 dockerd[514]: time="2024-08-06T08:49:03.587101412Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 06 08:49:03 NoKubernetes-883000 dockerd[514]: time="2024-08-06T08:49:03.587114174Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 06 08:49:03 NoKubernetes-883000 systemd[1]: Stopping Docker Application Container Engine...
	Aug 06 08:49:04 NoKubernetes-883000 systemd[1]: docker.service: Deactivated successfully.
	Aug 06 08:49:04 NoKubernetes-883000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 08:49:04 NoKubernetes-883000 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[916]: time="2024-08-06T08:49:04.618318101Z" level=info msg="Starting up"
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[916]: time="2024-08-06T08:49:04.618753699Z" level=info msg="containerd not running, starting managed containerd"
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[916]: time="2024-08-06T08:49:04.619321272Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=923
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.637486638Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.653755730Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.653854129Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.653934565Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.653976437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.654018853Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.654050021Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.654182598Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.654220207Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.654251389Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.654285840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.654326844Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.654433287Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.656009354Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.656054395Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.656175358Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.656216448Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.656252175Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.656285625Z" level=info msg="metadata content store policy set" policy=shared
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.656465349Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.656515541Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.656547554Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.656579372Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.656609667Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.656660545Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.656861504Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.656979629Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657015845Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657046787Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657077137Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657111520Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657141131Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657175495Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657207831Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657240870Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657270359Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657301511Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657347226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657382740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657412812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657443441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657472621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657502459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657532152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657561548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657590677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657623645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657657021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657686205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657714760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657745231Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657779007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657809440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657838933Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657910972Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657958968Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657991201Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.658020122Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.658077548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.658115466Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.658144220Z" level=info msg="NRI interface is disabled by configuration."
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.658328310Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.658414878Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.658473905Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.658518330Z" level=info msg="containerd successfully booted in 0.021616s"
	Aug 06 08:49:05 NoKubernetes-883000 dockerd[916]: time="2024-08-06T08:49:05.677576067Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 06 08:49:05 NoKubernetes-883000 dockerd[916]: time="2024-08-06T08:49:05.680649829Z" level=info msg="Loading containers: start."
	Aug 06 08:49:05 NoKubernetes-883000 dockerd[916]: time="2024-08-06T08:49:05.750257598Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 06 08:49:05 NoKubernetes-883000 dockerd[916]: time="2024-08-06T08:49:05.805287456Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 06 08:49:05 NoKubernetes-883000 dockerd[916]: time="2024-08-06T08:49:05.850009360Z" level=info msg="Loading containers: done."
	Aug 06 08:49:05 NoKubernetes-883000 dockerd[916]: time="2024-08-06T08:49:05.862001751Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 06 08:49:05 NoKubernetes-883000 dockerd[916]: time="2024-08-06T08:49:05.862095231Z" level=info msg="Daemon has completed initialization"
	Aug 06 08:49:05 NoKubernetes-883000 dockerd[916]: time="2024-08-06T08:49:05.879698335Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 06 08:49:05 NoKubernetes-883000 dockerd[916]: time="2024-08-06T08:49:05.879844545Z" level=info msg="API listen on [::]:2376"
	Aug 06 08:49:05 NoKubernetes-883000 systemd[1]: Started Docker Application Container Engine.
	Aug 06 08:49:10 NoKubernetes-883000 dockerd[916]: time="2024-08-06T08:49:10.694269435Z" level=info msg="Processing signal 'terminated'"
	Aug 06 08:49:10 NoKubernetes-883000 dockerd[916]: time="2024-08-06T08:49:10.695168665Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 06 08:49:10 NoKubernetes-883000 systemd[1]: Stopping Docker Application Container Engine...
	Aug 06 08:49:10 NoKubernetes-883000 dockerd[916]: time="2024-08-06T08:49:10.695757036Z" level=info msg="Daemon shutdown complete"
	Aug 06 08:49:10 NoKubernetes-883000 dockerd[916]: time="2024-08-06T08:49:10.695858545Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 06 08:49:10 NoKubernetes-883000 dockerd[916]: time="2024-08-06T08:49:10.695905825Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 06 08:49:11 NoKubernetes-883000 systemd[1]: docker.service: Deactivated successfully.
	Aug 06 08:49:11 NoKubernetes-883000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 08:49:11 NoKubernetes-883000 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:11.731728754Z" level=info msg="Starting up"
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:11.732251876Z" level=info msg="containerd not running, starting managed containerd"
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:11.732813964Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1278
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.749289827Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.765721549Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.765775123Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.765808534Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.765819468Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.765840566Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.765849814Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.766013617Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.766050243Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.766065302Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.766075818Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.766093828Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.766172257Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.767858332Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.767900257Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768017194Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768053151Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768097173Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768113532Z" level=info msg="metadata content store policy set" policy=shared
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768256116Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768304007Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768317426Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768327622Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768337505Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768378465Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768523714Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768589418Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768626724Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768638812Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768654854Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768669311Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768678388Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768694745Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768707580Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768716662Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768725445Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768734197Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768748599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768769090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768781170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768790602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768801716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768825953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768862517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768885091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768901966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768924476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768961685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768973137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768982034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768993972Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.769057348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.769066458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.769074171Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.769124032Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.769139149Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.769147507Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.769155756Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.769162659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.769171941Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.769179387Z" level=info msg="NRI interface is disabled by configuration."
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.769338116Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.769401248Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.769430321Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.769464634Z" level=info msg="containerd successfully booted in 0.020797s"
	Aug 06 08:49:12 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:12.776494839Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 06 08:49:13 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:13.222187609Z" level=info msg="Loading containers: start."
	Aug 06 08:49:13 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:13.293026496Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 06 08:49:13 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:13.354381111Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 06 08:49:13 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:13.396451137Z" level=info msg="Loading containers: done."
	Aug 06 08:49:13 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:13.407341841Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 06 08:49:13 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:13.407425697Z" level=info msg="Daemon has completed initialization"
	Aug 06 08:49:13 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:13.428235133Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 06 08:49:13 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:13.428309620Z" level=info msg="API listen on [::]:2376"
	Aug 06 08:49:13 NoKubernetes-883000 systemd[1]: Started Docker Application Container Engine.
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.319485654Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.319561053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.319630126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.319715279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.327534147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.327593543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.327610661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.327706415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.329842522Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.329945225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.330664506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.330743278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.330754269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.330840188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.331270932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.331415632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.520962130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.521031407Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.521043878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.521649415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.563582183Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.563631061Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.563639594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.563694527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.563395434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.563460569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.563473714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.563542123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.581795146Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.581861237Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.581873610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.582032809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.347541579Z" level=info msg="shim disconnected" id=a9f9f0c068b4a199649f6e871e913fd8ea6b693138b857faa767d69202bec18c namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.347607165Z" level=warning msg="cleaning up after shim disconnected" id=a9f9f0c068b4a199649f6e871e913fd8ea6b693138b857faa767d69202bec18c namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.347617439Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:28.348600421Z" level=info msg="ignoring event" container=a9f9f0c068b4a199649f6e871e913fd8ea6b693138b857faa767d69202bec18c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:28.352761591Z" level=info msg="ignoring event" container=afab8cceafa6529dcfa145e07c2f3b79f7dc13b45fb24052db4cbfa1c243f7e7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.352985043Z" level=info msg="shim disconnected" id=afab8cceafa6529dcfa145e07c2f3b79f7dc13b45fb24052db4cbfa1c243f7e7 namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.353112588Z" level=warning msg="cleaning up after shim disconnected" id=afab8cceafa6529dcfa145e07c2f3b79f7dc13b45fb24052db4cbfa1c243f7e7 namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.353154168Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:28.360845906Z" level=info msg="ignoring event" container=dcb28844b302a887ae1aeceb30dea829e3ca11adf41d86533fc0db823fd60088 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.361136610Z" level=info msg="shim disconnected" id=dcb28844b302a887ae1aeceb30dea829e3ca11adf41d86533fc0db823fd60088 namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.361260651Z" level=warning msg="cleaning up after shim disconnected" id=dcb28844b302a887ae1aeceb30dea829e3ca11adf41d86533fc0db823fd60088 namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.361303843Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:28.363135149Z" level=info msg="ignoring event" container=60fd17d9a809019cd55339b94a3c98f4569c26a9c46699a3c3bde6f82c5ce0a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.363243371Z" level=info msg="shim disconnected" id=60fd17d9a809019cd55339b94a3c98f4569c26a9c46699a3c3bde6f82c5ce0a3 namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.363311310Z" level=warning msg="cleaning up after shim disconnected" id=60fd17d9a809019cd55339b94a3c98f4569c26a9c46699a3c3bde6f82c5ce0a3 namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.363321562Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.387748663Z" level=info msg="shim disconnected" id=452adc1fe0e1b44dc2788481c2f0a73eba9b359c4d93f53ec6c2589507d689a7 namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.387857984Z" level=warning msg="cleaning up after shim disconnected" id=452adc1fe0e1b44dc2788481c2f0a73eba9b359c4d93f53ec6c2589507d689a7 namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.387893428Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:28.388660855Z" level=info msg="ignoring event" container=452adc1fe0e1b44dc2788481c2f0a73eba9b359c4d93f53ec6c2589507d689a7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.390035682Z" level=info msg="shim disconnected" id=3612cce423ad7cf7143ebcb176ac26e1a9af335107cb4d742eaa53fcb5bc6d5e namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:28.390162234Z" level=info msg="ignoring event" container=3612cce423ad7cf7143ebcb176ac26e1a9af335107cb4d742eaa53fcb5bc6d5e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.390235887Z" level=warning msg="cleaning up after shim disconnected" id=3612cce423ad7cf7143ebcb176ac26e1a9af335107cb4d742eaa53fcb5bc6d5e namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.390277850Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.393513917Z" level=info msg="shim disconnected" id=1c10df0681379f4f9494269eec5e242a2f9181e9a7b81cba29227970bd2b2d14 namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:28.393650959Z" level=info msg="ignoring event" container=1c10df0681379f4f9494269eec5e242a2f9181e9a7b81cba29227970bd2b2d14 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.395534820Z" level=warning msg="cleaning up after shim disconnected" id=1c10df0681379f4f9494269eec5e242a2f9181e9a7b81cba29227970bd2b2d14 namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.395598213Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.401432763Z" level=warning msg="cleanup warnings time=\"2024-08-06T08:49:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.428796226Z" level=warning msg="cleanup warnings time=\"2024-08-06T08:49:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.438229931Z" level=warning msg="cleanup warnings time=\"2024-08-06T08:49:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 06 08:49:38 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:38.330846636Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=5bdab6d0a04c0d6d32149324abcb0988043e6f0f00ff73ffc8e9c8bf782abc32
	Aug 06 08:49:38 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:38.371145144Z" level=info msg="ignoring event" container=5bdab6d0a04c0d6d32149324abcb0988043e6f0f00ff73ffc8e9c8bf782abc32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 08:49:38 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:38.371900583Z" level=info msg="shim disconnected" id=5bdab6d0a04c0d6d32149324abcb0988043e6f0f00ff73ffc8e9c8bf782abc32 namespace=moby
	Aug 06 08:49:38 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:38.372105853Z" level=warning msg="cleaning up after shim disconnected" id=5bdab6d0a04c0d6d32149324abcb0988043e6f0f00ff73ffc8e9c8bf782abc32 namespace=moby
	Aug 06 08:49:38 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:38.372208024Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 06 08:49:39 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:39.130759163Z" level=info msg="Processing signal 'terminated'"
	Aug 06 08:49:39 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:39.131783672Z" level=info msg="Daemon shutdown complete"
	Aug 06 08:49:39 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:39.131883228Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 06 08:49:39 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:39.131950296Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Aug 06 08:49:39 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:39.131963533Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 06 08:49:39 NoKubernetes-883000 systemd[1]: Stopping Docker Application Container Engine...
	Aug 06 08:49:40 NoKubernetes-883000 systemd[1]: docker.service: Deactivated successfully.
	Aug 06 08:49:40 NoKubernetes-883000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 08:49:40 NoKubernetes-883000 systemd[1]: docker.service: Consumed 1.283s CPU time.
	Aug 06 08:49:40 NoKubernetes-883000 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 08:49:40 NoKubernetes-883000 dockerd[2669]: time="2024-08-06T08:49:40.165462276Z" level=info msg="Starting up"
	Aug 06 08:50:40 NoKubernetes-883000 dockerd[2669]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 06 08:50:40 NoKubernetes-883000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 06 08:50:40 NoKubernetes-883000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 06 08:50:40 NoKubernetes-883000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-amd64 start -p NoKubernetes-883000 --no-kubernetes --driver=hyperkit " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-883000 -n NoKubernetes-883000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-883000 -n NoKubernetes-883000: exit status 2 (152.964522ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestNoKubernetes/serial/StartWithStopK8s FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestNoKubernetes/serial/StartWithStopK8s]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-883000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p NoKubernetes-883000 logs -n 25: (2m0.734744704s)
helpers_test.go:252: TestNoKubernetes/serial/StartWithStopK8s logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-060000 sudo systemctl                        | auto-060000    | jenkins | v1.33.1 | 06 Aug 24 01:50 PDT | 06 Aug 24 01:50 PDT |
	|         | status kubelet --all --full                          |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-060000 sudo systemctl                        | auto-060000    | jenkins | v1.33.1 | 06 Aug 24 01:50 PDT | 06 Aug 24 01:50 PDT |
	|         | cat kubelet --no-pager                               |                |         |         |                     |                     |
	| ssh     | -p auto-060000 sudo journalctl                       | auto-060000    | jenkins | v1.33.1 | 06 Aug 24 01:50 PDT | 06 Aug 24 01:50 PDT |
	|         | -xeu kubelet --all --full                            |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-060000 sudo cat                              | auto-060000    | jenkins | v1.33.1 | 06 Aug 24 01:50 PDT | 06 Aug 24 01:50 PDT |
	|         | /etc/kubernetes/kubelet.conf                         |                |         |         |                     |                     |
	| ssh     | -p auto-060000 sudo cat                              | auto-060000    | jenkins | v1.33.1 | 06 Aug 24 01:50 PDT | 06 Aug 24 01:50 PDT |
	|         | /var/lib/kubelet/config.yaml                         |                |         |         |                     |                     |
	| ssh     | -p auto-060000 sudo systemctl                        | auto-060000    | jenkins | v1.33.1 | 06 Aug 24 01:50 PDT | 06 Aug 24 01:50 PDT |
	|         | status docker --all --full                           |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-060000 sudo systemctl                        | auto-060000    | jenkins | v1.33.1 | 06 Aug 24 01:50 PDT | 06 Aug 24 01:50 PDT |
	|         | cat docker --no-pager                                |                |         |         |                     |                     |
	| ssh     | -p auto-060000 sudo cat                              | auto-060000    | jenkins | v1.33.1 | 06 Aug 24 01:50 PDT | 06 Aug 24 01:50 PDT |
	|         | /etc/docker/daemon.json                              |                |         |         |                     |                     |
	| ssh     | -p auto-060000 sudo docker                           | auto-060000    | jenkins | v1.33.1 | 06 Aug 24 01:50 PDT | 06 Aug 24 01:50 PDT |
	|         | system info                                          |                |         |         |                     |                     |
	| ssh     | -p auto-060000 sudo systemctl                        | auto-060000    | jenkins | v1.33.1 | 06 Aug 24 01:50 PDT | 06 Aug 24 01:50 PDT |
	|         | status cri-docker --all --full                       |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-060000 sudo systemctl                        | auto-060000    | jenkins | v1.33.1 | 06 Aug 24 01:50 PDT | 06 Aug 24 01:50 PDT |
	|         | cat cri-docker --no-pager                            |                |         |         |                     |                     |
	| ssh     | -p auto-060000 sudo cat                              | auto-060000    | jenkins | v1.33.1 | 06 Aug 24 01:50 PDT | 06 Aug 24 01:50 PDT |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |         |                     |                     |
	| ssh     | -p auto-060000 sudo cat                              | auto-060000    | jenkins | v1.33.1 | 06 Aug 24 01:50 PDT | 06 Aug 24 01:50 PDT |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |         |                     |                     |
	| ssh     | -p auto-060000 sudo                                  | auto-060000    | jenkins | v1.33.1 | 06 Aug 24 01:50 PDT | 06 Aug 24 01:50 PDT |
	|         | cri-dockerd --version                                |                |         |         |                     |                     |
	| ssh     | -p auto-060000 sudo systemctl                        | auto-060000    | jenkins | v1.33.1 | 06 Aug 24 01:50 PDT |                     |
	|         | status containerd --all --full                       |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-060000 sudo systemctl                        | auto-060000    | jenkins | v1.33.1 | 06 Aug 24 01:50 PDT | 06 Aug 24 01:50 PDT |
	|         | cat containerd --no-pager                            |                |         |         |                     |                     |
	| ssh     | -p auto-060000 sudo cat                              | auto-060000    | jenkins | v1.33.1 | 06 Aug 24 01:50 PDT | 06 Aug 24 01:50 PDT |
	|         | /lib/systemd/system/containerd.service               |                |         |         |                     |                     |
	| ssh     | -p auto-060000 sudo cat                              | auto-060000    | jenkins | v1.33.1 | 06 Aug 24 01:50 PDT | 06 Aug 24 01:50 PDT |
	|         | /etc/containerd/config.toml                          |                |         |         |                     |                     |
	| ssh     | -p auto-060000 sudo containerd                       | auto-060000    | jenkins | v1.33.1 | 06 Aug 24 01:50 PDT | 06 Aug 24 01:50 PDT |
	|         | config dump                                          |                |         |         |                     |                     |
	| ssh     | -p auto-060000 sudo systemctl                        | auto-060000    | jenkins | v1.33.1 | 06 Aug 24 01:50 PDT |                     |
	|         | status crio --all --full                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-060000 sudo systemctl                        | auto-060000    | jenkins | v1.33.1 | 06 Aug 24 01:50 PDT | 06 Aug 24 01:50 PDT |
	|         | cat crio --no-pager                                  |                |         |         |                     |                     |
	| ssh     | -p auto-060000 sudo find                             | auto-060000    | jenkins | v1.33.1 | 06 Aug 24 01:50 PDT | 06 Aug 24 01:50 PDT |
	|         | /etc/crio -type f -exec sh -c                        |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| ssh     | -p auto-060000 sudo crio                             | auto-060000    | jenkins | v1.33.1 | 06 Aug 24 01:50 PDT | 06 Aug 24 01:50 PDT |
	|         | config                                               |                |         |         |                     |                     |
	| delete  | -p auto-060000                                       | auto-060000    | jenkins | v1.33.1 | 06 Aug 24 01:50 PDT | 06 Aug 24 01:50 PDT |
	| start   | -p kindnet-060000                                    | kindnet-060000 | jenkins | v1.33.1 | 06 Aug 24 01:50 PDT |                     |
	|         | --memory=3072                                        |                |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                |         |         |                     |                     |
	|         | --cni=kindnet                                        |                |         |         |                     |                     |
	|         | --driver=hyperkit                                    |                |         |         |                     |                     |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 01:50:22
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 01:50:22.061978    8911 out.go:291] Setting OutFile to fd 1 ...
	I0806 01:50:22.065968    8911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:50:22.065976    8911 out.go:304] Setting ErrFile to fd 2...
	I0806 01:50:22.065986    8911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:50:22.066181    8911 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	I0806 01:50:22.068000    8911 out.go:298] Setting JSON to false
	I0806 01:50:22.092545    8911 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":6584,"bootTime":1722927638,"procs":518,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0806 01:50:22.092644    8911 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 01:50:22.153439    8911 out.go:177] * [kindnet-060000] minikube v1.33.1 on Darwin 14.5
	I0806 01:50:22.173160    8911 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 01:50:22.173160    8911 notify.go:220] Checking for updates...
	I0806 01:50:22.195447    8911 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 01:50:22.216437    8911 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0806 01:50:22.237243    8911 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 01:50:22.258549    8911 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 01:50:22.279325    8911 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 01:50:22.301249    8911 config.go:182] Loaded profile config "NoKubernetes-883000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0806 01:50:22.301445    8911 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 01:50:22.331692    8911 out.go:177] * Using the hyperkit driver based on user configuration
	I0806 01:50:22.373319    8911 start.go:297] selected driver: hyperkit
	I0806 01:50:22.373345    8911 start.go:901] validating driver "hyperkit" against <nil>
	I0806 01:50:22.373363    8911 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 01:50:22.377690    8911 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:50:22.377797    8911 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19370-944/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0806 01:50:22.386022    8911 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0806 01:50:22.389839    8911 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 01:50:22.389859    8911 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0806 01:50:22.389899    8911 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 01:50:22.390129    8911 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 01:50:22.390156    8911 cni.go:84] Creating CNI manager for "kindnet"
	I0806 01:50:22.390162    8911 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0806 01:50:22.390223    8911 start.go:340] cluster config:
	{Name:kindnet-060000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-060000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRun
time:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuth
Sock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 01:50:22.390319    8911 iso.go:125] acquiring lock: {Name:mka9ceffb203a07dd8928fb34e5b66df1a4204ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:50:22.432518    8911 out.go:177] * Starting "kindnet-060000" primary control-plane node in "kindnet-060000" cluster
	I0806 01:50:22.453389    8911 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 01:50:22.453461    8911 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0806 01:50:22.453483    8911 cache.go:56] Caching tarball of preloaded images
	I0806 01:50:22.453721    8911 preload.go:172] Found /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0806 01:50:22.453744    8911 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 01:50:22.453936    8911 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/kindnet-060000/config.json ...
	I0806 01:50:22.453990    8911 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/kindnet-060000/config.json: {Name:mk128fcd0605cd66810ba529cac1ee2ae0e83f4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 01:50:22.454806    8911 start.go:360] acquireMachinesLock for kindnet-060000: {Name:mk23fe223591838ba69a1052c4474834b6e8897d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:50:22.454934    8911 start.go:364] duration metric: took 104.141µs to acquireMachinesLock for "kindnet-060000"
	I0806 01:50:22.455000    8911 start.go:93] Provisioning new machine with config: &{Name:kindnet-060000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-060000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:50:22.455093    8911 start.go:125] createHost starting for "" (driver="hyperkit")
	I0806 01:50:22.497632    8911 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0806 01:50:22.497896    8911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 01:50:22.497960    8911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 01:50:22.508124    8911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54976
	I0806 01:50:22.508480    8911 main.go:141] libmachine: () Calling .GetVersion
	I0806 01:50:22.508905    8911 main.go:141] libmachine: Using API Version  1
	I0806 01:50:22.508914    8911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 01:50:22.509123    8911 main.go:141] libmachine: () Calling .GetMachineName
	I0806 01:50:22.509308    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetMachineName
	I0806 01:50:22.509413    8911 main.go:141] libmachine: (kindnet-060000) Calling .DriverName
	I0806 01:50:22.509528    8911 start.go:159] libmachine.API.Create for "kindnet-060000" (driver="hyperkit")
	I0806 01:50:22.509549    8911 client.go:168] LocalClient.Create starting
	I0806 01:50:22.509582    8911 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem
	I0806 01:50:22.509633    8911 main.go:141] libmachine: Decoding PEM data...
	I0806 01:50:22.509654    8911 main.go:141] libmachine: Parsing certificate...
	I0806 01:50:22.509712    8911 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem
	I0806 01:50:22.509750    8911 main.go:141] libmachine: Decoding PEM data...
	I0806 01:50:22.509762    8911 main.go:141] libmachine: Parsing certificate...
	I0806 01:50:22.509775    8911 main.go:141] libmachine: Running pre-create checks...
	I0806 01:50:22.509786    8911 main.go:141] libmachine: (kindnet-060000) Calling .PreCreateCheck
	I0806 01:50:22.509870    8911 main.go:141] libmachine: (kindnet-060000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:50:22.510022    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetConfigRaw
	I0806 01:50:22.510505    8911 main.go:141] libmachine: Creating machine...
	I0806 01:50:22.510514    8911 main.go:141] libmachine: (kindnet-060000) Calling .Create
	I0806 01:50:22.510578    8911 main.go:141] libmachine: (kindnet-060000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:50:22.510689    8911 main.go:141] libmachine: (kindnet-060000) DBG | I0806 01:50:22.510575    8919 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 01:50:22.510752    8911 main.go:141] libmachine: (kindnet-060000) Downloading /Users/jenkins/minikube-integration/19370-944/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19370-944/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0806 01:50:22.700250    8911 main.go:141] libmachine: (kindnet-060000) DBG | I0806 01:50:22.700157    8919 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/kindnet-060000/id_rsa...
	I0806 01:50:22.800341    8911 main.go:141] libmachine: (kindnet-060000) DBG | I0806 01:50:22.800306    8919 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/kindnet-060000/kindnet-060000.rawdisk...
	I0806 01:50:22.800357    8911 main.go:141] libmachine: (kindnet-060000) DBG | Writing magic tar header
	I0806 01:50:22.800363    8911 main.go:141] libmachine: (kindnet-060000) DBG | Writing SSH key tar header
	I0806 01:50:22.801089    8911 main.go:141] libmachine: (kindnet-060000) DBG | I0806 01:50:22.801003    8919 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19370-944/.minikube/machines/kindnet-060000 ...
	I0806 01:50:23.188926    8911 main.go:141] libmachine: (kindnet-060000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:50:23.188943    8911 main.go:141] libmachine: (kindnet-060000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/kindnet-060000/hyperkit.pid
	I0806 01:50:23.188960    8911 main.go:141] libmachine: (kindnet-060000) DBG | Using UUID 657eab68-7302-4b38-a41a-138c316c2d7b
	I0806 01:50:23.214245    8911 main.go:141] libmachine: (kindnet-060000) DBG | Generated MAC 36:8a:45:d3:3c:9b
	I0806 01:50:23.214269    8911 main.go:141] libmachine: (kindnet-060000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=kindnet-060000
	I0806 01:50:23.214311    8911 main.go:141] libmachine: (kindnet-060000) DBG | 2024/08/06 01:50:23 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/kindnet-060000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"657eab68-7302-4b38-a41a-138c316c2d7b", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/kindnet-060000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/kindnet-060000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/kindnet-060000/initrd", Bootrom:"", CPUs:2, Memory:3072, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 01:50:23.214352    8911 main.go:141] libmachine: (kindnet-060000) DBG | 2024/08/06 01:50:23 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/kindnet-060000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"657eab68-7302-4b38-a41a-138c316c2d7b", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/kindnet-060000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/kindnet-060000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19370-944/.minikube/machines/kindnet-060000/initrd", Bootrom:"", CPUs:2, Memory:3072, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0806 01:50:23.214425    8911 main.go:141] libmachine: (kindnet-060000) DBG | 2024/08/06 01:50:23 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19370-944/.minikube/machines/kindnet-060000/hyperkit.pid", "-c", "2", "-m", "3072M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "657eab68-7302-4b38-a41a-138c316c2d7b", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/kindnet-060000/kindnet-060000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/kindnet-060000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/kindnet-060000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/kindnet-060000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/kindnet-060000/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/kindnet-060000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=kindnet-060000"}
	I0806 01:50:23.214477    8911 main.go:141] libmachine: (kindnet-060000) DBG | 2024/08/06 01:50:23 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19370-944/.minikube/machines/kindnet-060000/hyperkit.pid -c 2 -m 3072M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 657eab68-7302-4b38-a41a-138c316c2d7b -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/kindnet-060000/kindnet-060000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/kindnet-060000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/kindnet-060000/tty,log=/Users/jenkins/minikube-integration/19370-944/.minikube/machines/kindnet-060000/console-ring -f kexec,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/kindnet-060000/bzimage,/Users/jenkins/minikube-integration/19370-944/.minikube/machines/kindnet-060000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=kindnet-060000"
	I0806 01:50:23.214506    8911 main.go:141] libmachine: (kindnet-060000) DBG | 2024/08/06 01:50:23 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0806 01:50:23.217461    8911 main.go:141] libmachine: (kindnet-060000) DBG | 2024/08/06 01:50:23 DEBUG: hyperkit: Pid is 8921
	I0806 01:50:23.217925    8911 main.go:141] libmachine: (kindnet-060000) DBG | Attempt 0
	I0806 01:50:23.217938    8911 main.go:141] libmachine: (kindnet-060000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:50:23.218002    8911 main.go:141] libmachine: (kindnet-060000) DBG | hyperkit pid from json: 8921
	I0806 01:50:23.219236    8911 main.go:141] libmachine: (kindnet-060000) DBG | Searching for 36:8a:45:d3:3c:9b in /var/db/dhcpd_leases ...
	I0806 01:50:23.219343    8911 main.go:141] libmachine: (kindnet-060000) DBG | Found 22 entries in /var/db/dhcpd_leases!
	I0806 01:50:23.219356    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:3e:ec:8c:ea:d0:40 ID:1,3e:ec:8c:ea:d0:40 Lease:0x66b33508}
	I0806 01:50:23.219373    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:5a:77:15:92:25:15 ID:1,5a:77:15:92:25:15 Lease:0x66b334fa}
	I0806 01:50:23.219383    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:16:b2:1c:70:51:23 ID:1,16:b2:1c:70:51:23 Lease:0x66b334a9}
	I0806 01:50:23.219430    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:d6:6d:bb:3b:ac:32 ID:1,d6:6d:bb:3b:ac:32 Lease:0x66b331c9}
	I0806 01:50:23.219446    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6a:a4:ab:b6:4f:9d ID:1,6a:a4:ab:b6:4f:9d Lease:0x66b32eef}
	I0806 01:50:23.219542    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:50:23.219586    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:50:23.219601    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:50:23.219611    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:50:23.219639    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:50:23.219667    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:50:23.219695    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:50:23.219714    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:50:23.219728    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:50:23.219743    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:50:23.219755    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:50:23.219770    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:50:23.219782    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:50:23.219794    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:50:23.219804    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:50:23.219819    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:50:23.219834    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:50:23.225003    8911 main.go:141] libmachine: (kindnet-060000) DBG | 2024/08/06 01:50:23 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0806 01:50:23.234089    8911 main.go:141] libmachine: (kindnet-060000) DBG | 2024/08/06 01:50:23 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/kindnet-060000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0806 01:50:23.235001    8911 main.go:141] libmachine: (kindnet-060000) DBG | 2024/08/06 01:50:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 01:50:23.235033    8911 main.go:141] libmachine: (kindnet-060000) DBG | 2024/08/06 01:50:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 01:50:23.235045    8911 main.go:141] libmachine: (kindnet-060000) DBG | 2024/08/06 01:50:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 01:50:23.235062    8911 main.go:141] libmachine: (kindnet-060000) DBG | 2024/08/06 01:50:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 01:50:23.639456    8911 main.go:141] libmachine: (kindnet-060000) DBG | 2024/08/06 01:50:23 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0806 01:50:23.639472    8911 main.go:141] libmachine: (kindnet-060000) DBG | 2024/08/06 01:50:23 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0806 01:50:23.754808    8911 main.go:141] libmachine: (kindnet-060000) DBG | 2024/08/06 01:50:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0806 01:50:23.754823    8911 main.go:141] libmachine: (kindnet-060000) DBG | 2024/08/06 01:50:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0806 01:50:23.754830    8911 main.go:141] libmachine: (kindnet-060000) DBG | 2024/08/06 01:50:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0806 01:50:23.754845    8911 main.go:141] libmachine: (kindnet-060000) DBG | 2024/08/06 01:50:23 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0806 01:50:23.755401    8911 main.go:141] libmachine: (kindnet-060000) DBG | 2024/08/06 01:50:23 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0806 01:50:23.755410    8911 main.go:141] libmachine: (kindnet-060000) DBG | 2024/08/06 01:50:23 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0806 01:50:25.219805    8911 main.go:141] libmachine: (kindnet-060000) DBG | Attempt 1
	I0806 01:50:25.219821    8911 main.go:141] libmachine: (kindnet-060000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:50:25.219830    8911 main.go:141] libmachine: (kindnet-060000) DBG | hyperkit pid from json: 8921
	I0806 01:50:25.220732    8911 main.go:141] libmachine: (kindnet-060000) DBG | Searching for 36:8a:45:d3:3c:9b in /var/db/dhcpd_leases ...
	I0806 01:50:25.220778    8911 main.go:141] libmachine: (kindnet-060000) DBG | Found 22 entries in /var/db/dhcpd_leases!
	I0806 01:50:25.220786    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:3e:ec:8c:ea:d0:40 ID:1,3e:ec:8c:ea:d0:40 Lease:0x66b33508}
	I0806 01:50:25.220808    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:5a:77:15:92:25:15 ID:1,5a:77:15:92:25:15 Lease:0x66b334fa}
	I0806 01:50:25.220823    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:16:b2:1c:70:51:23 ID:1,16:b2:1c:70:51:23 Lease:0x66b334a9}
	I0806 01:50:25.220831    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:d6:6d:bb:3b:ac:32 ID:1,d6:6d:bb:3b:ac:32 Lease:0x66b331c9}
	I0806 01:50:25.220838    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6a:a4:ab:b6:4f:9d ID:1,6a:a4:ab:b6:4f:9d Lease:0x66b32eef}
	I0806 01:50:25.220844    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:50:25.220868    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:50:25.220882    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:50:25.220892    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:50:25.220901    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:50:25.220908    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:50:25.220914    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:50:25.220925    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:50:25.220937    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:50:25.220951    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:50:25.220966    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:50:25.220974    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:50:25.220982    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:50:25.220988    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:50:25.220994    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:50:25.221000    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:50:25.221014    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:50:27.222707    8911 main.go:141] libmachine: (kindnet-060000) DBG | Attempt 2
	I0806 01:50:27.222723    8911 main.go:141] libmachine: (kindnet-060000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:50:27.222806    8911 main.go:141] libmachine: (kindnet-060000) DBG | hyperkit pid from json: 8921
	I0806 01:50:27.223607    8911 main.go:141] libmachine: (kindnet-060000) DBG | Searching for 36:8a:45:d3:3c:9b in /var/db/dhcpd_leases ...
	I0806 01:50:27.223644    8911 main.go:141] libmachine: (kindnet-060000) DBG | Found 22 entries in /var/db/dhcpd_leases!
	I0806 01:50:27.223651    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:3e:ec:8c:ea:d0:40 ID:1,3e:ec:8c:ea:d0:40 Lease:0x66b33508}
	I0806 01:50:27.223659    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:5a:77:15:92:25:15 ID:1,5a:77:15:92:25:15 Lease:0x66b334fa}
	I0806 01:50:27.223667    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:16:b2:1c:70:51:23 ID:1,16:b2:1c:70:51:23 Lease:0x66b334a9}
	I0806 01:50:27.223676    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:d6:6d:bb:3b:ac:32 ID:1,d6:6d:bb:3b:ac:32 Lease:0x66b331c9}
	I0806 01:50:27.223682    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6a:a4:ab:b6:4f:9d ID:1,6a:a4:ab:b6:4f:9d Lease:0x66b32eef}
	I0806 01:50:27.223689    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:50:27.223695    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:50:27.223714    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:50:27.223728    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:50:27.223737    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:50:27.223746    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:50:27.223753    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:50:27.223761    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:50:27.223768    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:50:27.223775    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:50:27.223782    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:50:27.223790    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:50:27.223809    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:50:27.223814    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:50:27.223822    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:50:27.223829    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:50:27.223841    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:50:29.225815    8911 main.go:141] libmachine: (kindnet-060000) DBG | Attempt 3
	I0806 01:50:29.225835    8911 main.go:141] libmachine: (kindnet-060000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:50:29.225945    8911 main.go:141] libmachine: (kindnet-060000) DBG | hyperkit pid from json: 8921
	I0806 01:50:29.226740    8911 main.go:141] libmachine: (kindnet-060000) DBG | Searching for 36:8a:45:d3:3c:9b in /var/db/dhcpd_leases ...
	I0806 01:50:29.226790    8911 main.go:141] libmachine: (kindnet-060000) DBG | Found 22 entries in /var/db/dhcpd_leases!
	I0806 01:50:29.226802    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:3e:ec:8c:ea:d0:40 ID:1,3e:ec:8c:ea:d0:40 Lease:0x66b33508}
	I0806 01:50:29.226822    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:5a:77:15:92:25:15 ID:1,5a:77:15:92:25:15 Lease:0x66b334fa}
	I0806 01:50:29.226831    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:16:b2:1c:70:51:23 ID:1,16:b2:1c:70:51:23 Lease:0x66b334a9}
	I0806 01:50:29.226843    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:d6:6d:bb:3b:ac:32 ID:1,d6:6d:bb:3b:ac:32 Lease:0x66b331c9}
	I0806 01:50:29.226855    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6a:a4:ab:b6:4f:9d ID:1,6a:a4:ab:b6:4f:9d Lease:0x66b32eef}
	I0806 01:50:29.226874    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:50:29.226887    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:50:29.226897    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:50:29.226905    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:50:29.226921    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:50:29.226928    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:50:29.226934    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:50:29.226943    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:50:29.226952    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:50:29.226960    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:50:29.226967    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:50:29.226975    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:50:29.226982    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:50:29.226990    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:50:29.227002    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:50:29.227019    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:50:29.227038    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:50:29.342530    8911 main.go:141] libmachine: (kindnet-060000) DBG | 2024/08/06 01:50:29 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0806 01:50:29.342599    8911 main.go:141] libmachine: (kindnet-060000) DBG | 2024/08/06 01:50:29 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0806 01:50:29.342607    8911 main.go:141] libmachine: (kindnet-060000) DBG | 2024/08/06 01:50:29 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0806 01:50:29.366117    8911 main.go:141] libmachine: (kindnet-060000) DBG | 2024/08/06 01:50:29 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0806 01:50:31.228405    8911 main.go:141] libmachine: (kindnet-060000) DBG | Attempt 4
	I0806 01:50:31.228421    8911 main.go:141] libmachine: (kindnet-060000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:50:31.228535    8911 main.go:141] libmachine: (kindnet-060000) DBG | hyperkit pid from json: 8921
	I0806 01:50:31.229345    8911 main.go:141] libmachine: (kindnet-060000) DBG | Searching for 36:8a:45:d3:3c:9b in /var/db/dhcpd_leases ...
	I0806 01:50:31.229406    8911 main.go:141] libmachine: (kindnet-060000) DBG | Found 22 entries in /var/db/dhcpd_leases!
	I0806 01:50:31.229416    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:3e:ec:8c:ea:d0:40 ID:1,3e:ec:8c:ea:d0:40 Lease:0x66b33508}
	I0806 01:50:31.229426    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:5a:77:15:92:25:15 ID:1,5a:77:15:92:25:15 Lease:0x66b334fa}
	I0806 01:50:31.229432    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:16:b2:1c:70:51:23 ID:1,16:b2:1c:70:51:23 Lease:0x66b334a9}
	I0806 01:50:31.229439    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:d6:6d:bb:3b:ac:32 ID:1,d6:6d:bb:3b:ac:32 Lease:0x66b331c9}
	I0806 01:50:31.229445    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6a:a4:ab:b6:4f:9d ID:1,6a:a4:ab:b6:4f:9d Lease:0x66b32eef}
	I0806 01:50:31.229475    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:c2:6a:9f:16:92:98 ID:1,c2:6a:9f:16:92:98 Lease:0x66b32b74}
	I0806 01:50:31.229485    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:5e:cf:f7:36:15:fc ID:1,5e:cf:f7:36:15:fc Lease:0x66b32ab4}
	I0806 01:50:31.229492    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a2:49:4c:a:53:51 ID:1,a2:49:4c:a:53:51 Lease:0x66b1d897}
	I0806 01:50:31.229500    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:4e:ad:42:3:c5:ed ID:1,4e:ad:42:3:c5:ed Lease:0x66b32911}
	I0806 01:50:31.229507    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:ee:b:b7:3a:75:5c ID:1,ee:b:b7:3a:75:5c Lease:0x66b329d0}
	I0806 01:50:31.229514    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:1a:eb:5b:3:28:91 ID:1,1a:eb:5b:3:28:91 Lease:0x66b3297d}
	I0806 01:50:31.229522    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:ee:93:d:c6:66:77 ID:1,ee:93:d:c6:66:77 Lease:0x66b32310}
	I0806 01:50:31.229529    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:86:6d:d0:27:68:33 ID:1,86:6d:d0:27:68:33 Lease:0x66b322e9}
	I0806 01:50:31.229537    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:96:5:a5:0:af:d ID:1,96:5:a5:0:af:d Lease:0x66b322a7}
	I0806 01:50:31.229545    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:b2:cf:29:cb:a7:4a ID:1,b2:cf:29:cb:a7:4a Lease:0x66b32278}
	I0806 01:50:31.229552    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2a:1c:31:39:be:c1 ID:1,2a:1c:31:39:be:c1 Lease:0x66b321d0}
	I0806 01:50:31.229558    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:1e:e3:7:32:3f:12 ID:1,1e:e3:7:32:3f:12 Lease:0x66b1d07e}
	I0806 01:50:31.229569    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:8e:62:c0:2:2f:96 ID:1,8e:62:c0:2:2f:96 Lease:0x66b32172}
	I0806 01:50:31.229579    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:d2:ca:81:24:8f:65 ID:1,d2:ca:81:24:8f:65 Lease:0x66b32223}
	I0806 01:50:31.229586    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:fe:4c:4b:3:9:97 ID:1,fe:4c:4b:3:9:97 Lease:0x66b31ee0}
	I0806 01:50:31.229591    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:de:a5:44:c0:ca:f ID:1,de:a5:44:c0:ca:f Lease:0x66b31e1a}
	I0806 01:50:31.229599    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:a:10:7e:5:a7:1 ID:1,a:10:7e:5:a7:1 Lease:0x66b31c9a}
	I0806 01:50:33.231384    8911 main.go:141] libmachine: (kindnet-060000) DBG | Attempt 5
	I0806 01:50:33.231402    8911 main.go:141] libmachine: (kindnet-060000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:50:33.231502    8911 main.go:141] libmachine: (kindnet-060000) DBG | hyperkit pid from json: 8921
	I0806 01:50:33.232325    8911 main.go:141] libmachine: (kindnet-060000) DBG | Searching for 36:8a:45:d3:3c:9b in /var/db/dhcpd_leases ...
	I0806 01:50:33.232379    8911 main.go:141] libmachine: (kindnet-060000) DBG | Found 23 entries in /var/db/dhcpd_leases!
	I0806 01:50:33.232392    8911 main.go:141] libmachine: (kindnet-060000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:36:8a:45:d3:3c:9b ID:1,36:8a:45:d3:3c:9b Lease:0x66b33558}
	I0806 01:50:33.232400    8911 main.go:141] libmachine: (kindnet-060000) DBG | Found match: 36:8a:45:d3:3c:9b
	I0806 01:50:33.232405    8911 main.go:141] libmachine: (kindnet-060000) DBG | IP: 192.169.0.24
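The "Searching for 36:8a:45:d3:3c:9b in /var/db/dhcpd_leases" loop above retries until the macOS DHCP daemon hands the new VM a lease whose hardware address matches the generated MAC. A minimal Go sketch of that lookup follows; `findIPByMAC` is a hypothetical simplification (the real driver reads `/var/db/dhcpd_leases` on the host, whose entries are `{ name=... ip_address=... hw_address=1,<mac> ... }` blocks):

```go
package main

import (
	"fmt"
	"strings"
)

// findIPByMAC scans dhcpd_leases-style content for an entry whose
// hardware address matches mac and returns its IP address.
// Hypothetical sketch, not the driver's actual implementation.
func findIPByMAC(contents, mac string) (string, bool) {
	ip, matched := "", false
	for _, raw := range strings.Split(contents, "\n") {
		line := strings.TrimSpace(raw)
		switch {
		case line == "{": // start of a new lease entry: reset state
			ip, matched = "", false
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address carries a type prefix, e.g. "1,36:8a:45:d3:3c:9b"
			if _, hw, ok := strings.Cut(strings.TrimPrefix(line, "hw_address="), ","); ok && hw == mac {
				matched = true
			}
		case line == "}": // end of entry: report a hit if both fields matched
			if matched && ip != "" {
				return ip, true
			}
		}
	}
	return "", false
}

func main() {
	leases := "{\n\tname=minikube\n\tip_address=192.169.0.24\n\thw_address=1,36:8a:45:d3:3c:9b\n\tlease=0x66b33558\n}"
	if ip, ok := findIPByMAC(leases, "36:8a:45:d3:3c:9b"); ok {
		fmt.Println(ip)
	}
}
```

Note the polling structure in the log ("Attempt 4", "Attempt 5"): the lease file is only re-read every two seconds, so a boot that takes a few seconds to request DHCP shows several failed attempts before "Found match".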
	I0806 01:50:33.232450    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetConfigRaw
	I0806 01:50:33.233089    8911 main.go:141] libmachine: (kindnet-060000) Calling .DriverName
	I0806 01:50:33.233214    8911 main.go:141] libmachine: (kindnet-060000) Calling .DriverName
	I0806 01:50:33.233319    8911 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0806 01:50:33.233333    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetState
	I0806 01:50:33.233423    8911 main.go:141] libmachine: (kindnet-060000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:50:33.233490    8911 main.go:141] libmachine: (kindnet-060000) DBG | hyperkit pid from json: 8921
	I0806 01:50:33.234300    8911 main.go:141] libmachine: Detecting operating system of created instance...
	I0806 01:50:33.234311    8911 main.go:141] libmachine: Waiting for SSH to be available...
	I0806 01:50:33.234318    8911 main.go:141] libmachine: Getting to WaitForSSH function...
	I0806 01:50:33.234325    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHHostname
	I0806 01:50:33.234405    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHPort
	I0806 01:50:33.234510    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHKeyPath
	I0806 01:50:33.234610    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHKeyPath
	I0806 01:50:33.234706    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHUsername
	I0806 01:50:33.234819    8911 main.go:141] libmachine: Using SSH client type: native
	I0806 01:50:33.235033    8911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb3370c0] 0xb339e20 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0806 01:50:33.235040    8911 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0806 01:50:34.290632    8911 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 01:50:34.290646    8911 main.go:141] libmachine: Detecting the provisioner...
	I0806 01:50:34.290651    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHHostname
	I0806 01:50:34.290817    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHPort
	I0806 01:50:34.290906    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHKeyPath
	I0806 01:50:34.291005    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHKeyPath
	I0806 01:50:34.291105    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHUsername
	I0806 01:50:34.291241    8911 main.go:141] libmachine: Using SSH client type: native
	I0806 01:50:34.291381    8911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb3370c0] 0xb339e20 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0806 01:50:34.291389    8911 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0806 01:50:34.347068    8911 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0806 01:50:34.347108    8911 main.go:141] libmachine: found compatible host: buildroot
	I0806 01:50:34.347113    8911 main.go:141] libmachine: Provisioning with buildroot...
	I0806 01:50:34.347118    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetMachineName
	I0806 01:50:34.347258    8911 buildroot.go:166] provisioning hostname "kindnet-060000"
	I0806 01:50:34.347267    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetMachineName
	I0806 01:50:34.347365    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHHostname
	I0806 01:50:34.347462    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHPort
	I0806 01:50:34.347555    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHKeyPath
	I0806 01:50:34.347645    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHKeyPath
	I0806 01:50:34.347737    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHUsername
	I0806 01:50:34.347864    8911 main.go:141] libmachine: Using SSH client type: native
	I0806 01:50:34.347994    8911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb3370c0] 0xb339e20 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0806 01:50:34.348003    8911 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-060000 && echo "kindnet-060000" | sudo tee /etc/hostname
	I0806 01:50:34.411034    8911 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-060000
	
	I0806 01:50:34.411055    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHHostname
	I0806 01:50:34.411185    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHPort
	I0806 01:50:34.411291    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHKeyPath
	I0806 01:50:34.411387    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHKeyPath
	I0806 01:50:34.411484    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHUsername
	I0806 01:50:34.411609    8911 main.go:141] libmachine: Using SSH client type: native
	I0806 01:50:34.411758    8911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb3370c0] 0xb339e20 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0806 01:50:34.411768    8911 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-060000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-060000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-060000' | sudo tee -a /etc/hosts; 
				fi
			fi
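The shell snippet above is idempotent: it leaves `/etc/hosts` alone if the hostname is already mapped, rewrites an existing `127.0.1.1` line if one is present, and otherwise appends a fresh mapping. The same decision tree can be sketched as a pure Go function; `ensureHostname` is a hypothetical name for illustration (the provisioner actually runs the grep/sed commands over SSH):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname returns hosts with a line mapping 127.0.1.1 to name,
// mirroring the grep/sed logic: no-op if name is already mapped,
// rewrite an existing 127.0.1.1 line, else append one.
// Hypothetical sketch of the shell snippet, not minikube's code.
func ensureHostname(hosts, name string) string {
	already := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`)
	if already.MatchString(hosts) {
		return hosts // hostname already mapped: leave the file untouched
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostname("127.0.0.1 localhost\n", "kindnet-060000"))
}
```

Idempotence matters here because the provisioning step can be re-run on an existing machine without accumulating duplicate `/etc/hosts` entries.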
	I0806 01:50:34.470202    8911 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 01:50:34.470225    8911 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19370-944/.minikube CaCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19370-944/.minikube}
	I0806 01:50:34.470241    8911 buildroot.go:174] setting up certificates
	I0806 01:50:34.470251    8911 provision.go:84] configureAuth start
	I0806 01:50:34.470258    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetMachineName
	I0806 01:50:34.470402    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetIP
	I0806 01:50:34.470488    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHHostname
	I0806 01:50:34.470567    8911 provision.go:143] copyHostCerts
	I0806 01:50:34.470664    8911 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem, removing ...
	I0806 01:50:34.470674    8911 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 01:50:34.470823    8911 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem (1078 bytes)
	I0806 01:50:34.471050    8911 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem, removing ...
	I0806 01:50:34.471056    8911 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 01:50:34.471162    8911 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem (1123 bytes)
	I0806 01:50:34.471367    8911 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem, removing ...
	I0806 01:50:34.471373    8911 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 01:50:34.471550    8911 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem (1679 bytes)
	I0806 01:50:34.471694    8911 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem org=jenkins.kindnet-060000 san=[127.0.0.1 192.169.0.24 kindnet-060000 localhost minikube]
	I0806 01:50:34.639995    8911 provision.go:177] copyRemoteCerts
	I0806 01:50:34.640057    8911 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 01:50:34.640073    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHHostname
	I0806 01:50:34.640207    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHPort
	I0806 01:50:34.640313    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHKeyPath
	I0806 01:50:34.640438    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHUsername
	I0806 01:50:34.640533    8911 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/kindnet-060000/id_rsa Username:docker}
	I0806 01:50:34.674423    8911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 01:50:34.694647    8911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0806 01:50:34.715426    8911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0806 01:50:34.735235    8911 provision.go:87] duration metric: took 264.970662ms to configureAuth
	I0806 01:50:34.735248    8911 buildroot.go:189] setting minikube options for container-runtime
	I0806 01:50:34.735392    8911 config.go:182] Loaded profile config "kindnet-060000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 01:50:34.735408    8911 main.go:141] libmachine: (kindnet-060000) Calling .DriverName
	I0806 01:50:34.735607    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHHostname
	I0806 01:50:34.735697    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHPort
	I0806 01:50:34.735781    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHKeyPath
	I0806 01:50:34.735867    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHKeyPath
	I0806 01:50:34.735939    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHUsername
	I0806 01:50:34.736048    8911 main.go:141] libmachine: Using SSH client type: native
	I0806 01:50:34.736180    8911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb3370c0] 0xb339e20 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0806 01:50:34.736188    8911 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0806 01:50:34.789752    8911 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0806 01:50:34.789763    8911 buildroot.go:70] root file system type: tmpfs
	I0806 01:50:34.789842    8911 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0806 01:50:34.789854    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHHostname
	I0806 01:50:34.789990    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHPort
	I0806 01:50:34.790101    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHKeyPath
	I0806 01:50:34.790190    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHKeyPath
	I0806 01:50:34.790282    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHUsername
	I0806 01:50:34.790415    8911 main.go:141] libmachine: Using SSH client type: native
	I0806 01:50:34.790549    8911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb3370c0] 0xb339e20 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0806 01:50:34.790597    8911 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0806 01:50:34.852150    8911 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0806 01:50:34.852175    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHHostname
	I0806 01:50:34.852314    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHPort
	I0806 01:50:34.852418    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHKeyPath
	I0806 01:50:34.852511    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHKeyPath
	I0806 01:50:34.852597    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHUsername
	I0806 01:50:34.852730    8911 main.go:141] libmachine: Using SSH client type: native
	I0806 01:50:34.852881    8911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb3370c0] 0xb339e20 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0806 01:50:34.852894    8911 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0806 01:50:36.415139    8911 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0806 01:50:36.415160    8911 main.go:141] libmachine: Checking connection to Docker...
	I0806 01:50:36.415172    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetURL
	I0806 01:50:36.415313    8911 main.go:141] libmachine: Docker is up and running!
	I0806 01:50:36.415319    8911 main.go:141] libmachine: Reticulating splines...
	I0806 01:50:36.415334    8911 client.go:171] duration metric: took 13.905810437s to LocalClient.Create
	I0806 01:50:36.415346    8911 start.go:167] duration metric: took 13.9058606s to libmachine.API.Create "kindnet-060000"
	I0806 01:50:36.415357    8911 start.go:293] postStartSetup for "kindnet-060000" (driver="hyperkit")
	I0806 01:50:36.415365    8911 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 01:50:36.415375    8911 main.go:141] libmachine: (kindnet-060000) Calling .DriverName
	I0806 01:50:36.415528    8911 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 01:50:36.415541    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHHostname
	I0806 01:50:36.415627    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHPort
	I0806 01:50:36.415726    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHKeyPath
	I0806 01:50:36.415842    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHUsername
	I0806 01:50:36.415943    8911 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/kindnet-060000/id_rsa Username:docker}
	I0806 01:50:36.454903    8911 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 01:50:36.459179    8911 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 01:50:36.459196    8911 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/addons for local assets ...
	I0806 01:50:36.459341    8911 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/files for local assets ...
	I0806 01:50:36.459534    8911 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> 14372.pem in /etc/ssl/certs
	I0806 01:50:36.459753    8911 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 01:50:36.468858    8911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem --> /etc/ssl/certs/14372.pem (1708 bytes)
	I0806 01:50:36.502731    8911 start.go:296] duration metric: took 87.366057ms for postStartSetup
	I0806 01:50:36.502758    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetConfigRaw
	I0806 01:50:36.503421    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetIP
	I0806 01:50:36.503564    8911 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/kindnet-060000/config.json ...
	I0806 01:50:36.503930    8911 start.go:128] duration metric: took 14.048864016s to createHost
	I0806 01:50:36.503950    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHHostname
	I0806 01:50:36.504044    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHPort
	I0806 01:50:36.504137    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHKeyPath
	I0806 01:50:36.504240    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHKeyPath
	I0806 01:50:36.504342    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHUsername
	I0806 01:50:36.504452    8911 main.go:141] libmachine: Using SSH client type: native
	I0806 01:50:36.504570    8911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb3370c0] 0xb339e20 <nil>  [] 0s} 192.169.0.24 22 <nil> <nil>}
	I0806 01:50:36.504577    8911 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 01:50:36.559240    8911 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722934235.893028876
	
	I0806 01:50:36.559252    8911 fix.go:216] guest clock: 1722934235.893028876
	I0806 01:50:36.559257    8911 fix.go:229] Guest: 2024-08-06 01:50:35.893028876 -0700 PDT Remote: 2024-08-06 01:50:36.503938 -0700 PDT m=+14.518467328 (delta=-610.909124ms)
	I0806 01:50:36.559272    8911 fix.go:200] guest clock delta is within tolerance: -610.909124ms
	I0806 01:50:36.559282    8911 start.go:83] releasing machines lock for "kindnet-060000", held for 14.104379155s
	I0806 01:50:36.559308    8911 main.go:141] libmachine: (kindnet-060000) Calling .DriverName
	I0806 01:50:36.559441    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetIP
	I0806 01:50:36.559555    8911 main.go:141] libmachine: (kindnet-060000) Calling .DriverName
	I0806 01:50:36.559892    8911 main.go:141] libmachine: (kindnet-060000) Calling .DriverName
	I0806 01:50:36.560012    8911 main.go:141] libmachine: (kindnet-060000) Calling .DriverName
	I0806 01:50:36.560122    8911 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 01:50:36.560151    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHHostname
	I0806 01:50:36.560164    8911 ssh_runner.go:195] Run: cat /version.json
	I0806 01:50:36.560174    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHHostname
	I0806 01:50:36.560264    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHPort
	I0806 01:50:36.560280    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHPort
	I0806 01:50:36.560365    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHKeyPath
	I0806 01:50:36.560385    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHKeyPath
	I0806 01:50:36.560477    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHUsername
	I0806 01:50:36.560480    8911 main.go:141] libmachine: (kindnet-060000) Calling .GetSSHUsername
	I0806 01:50:36.560559    8911 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/kindnet-060000/id_rsa Username:docker}
	I0806 01:50:36.560564    8911 sshutil.go:53] new ssh client: &{IP:192.169.0.24 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/kindnet-060000/id_rsa Username:docker}
	I0806 01:50:36.589182    8911 ssh_runner.go:195] Run: systemctl --version
	I0806 01:50:36.640154    8911 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 01:50:36.645102    8911 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 01:50:36.645147    8911 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 01:50:36.659058    8911 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0806 01:50:36.659075    8911 start.go:495] detecting cgroup driver to use...
	I0806 01:50:36.659220    8911 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 01:50:36.674251    8911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0806 01:50:36.683884    8911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0806 01:50:36.692612    8911 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0806 01:50:36.692661    8911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0806 01:50:36.701467    8911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 01:50:36.710216    8911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0806 01:50:36.718739    8911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 01:50:36.727418    8911 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 01:50:36.736337    8911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0806 01:50:36.745885    8911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0806 01:50:36.754546    8911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0806 01:50:36.763297    8911 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 01:50:36.771211    8911 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 01:50:36.779094    8911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 01:50:36.873679    8911 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0806 01:50:36.893412    8911 start.go:495] detecting cgroup driver to use...
	I0806 01:50:36.893492    8911 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0806 01:50:36.907655    8911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 01:50:36.924512    8911 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 01:50:36.939075    8911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 01:50:36.950555    8911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 01:50:36.961727    8911 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0806 01:50:37.012685    8911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 01:50:37.024509    8911 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 01:50:37.039464    8911 ssh_runner.go:195] Run: which cri-dockerd
	I0806 01:50:37.042326    8911 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0806 01:50:37.050144    8911 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0806 01:50:40.052084    8688 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.053245245s)
	I0806 01:50:40.052137    8688 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0806 01:50:40.100018    8688 out.go:177] 
	W0806 01:50:40.146880    8688 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 06 08:49:01 NoKubernetes-883000 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[514]: time="2024-08-06T08:49:01.274677964Z" level=info msg="Starting up"
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[514]: time="2024-08-06T08:49:01.275369845Z" level=info msg="containerd not running, starting managed containerd"
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[514]: time="2024-08-06T08:49:01.275973884Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.291309732Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.306254559Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.306314372Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.306378930Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.306431840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.306505980Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.306543093Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.306684320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.306724423Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.306755711Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.306784212Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.306871131Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.307046233Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.308636497Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.308691334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.308823616Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.308865756Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.308952363Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.309015995Z" level=info msg="metadata content store policy set" policy=shared
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.319015822Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.319105487Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.319153205Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.319199562Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.319237126Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.319326692Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.319537182Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.319654784Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.319694863Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.319732865Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.319765994Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.319800003Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.319832450Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.319862782Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.319903456Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.319942997Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.319976301Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320005313Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320042988Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320115816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320148106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320183466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320214312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320247022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320276187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320305067Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320344348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320378986Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320408274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320436772Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320465286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320495451Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320530293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320561447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320592669Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320676575Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320721067Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320751898Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320781035Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320810928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320840767Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.320868697Z" level=info msg="NRI interface is disabled by configuration."
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.321068806Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.321154341Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.321250650Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 06 08:49:01 NoKubernetes-883000 dockerd[521]: time="2024-08-06T08:49:01.321293140Z" level=info msg="containerd successfully booted in 0.030427s"
	Aug 06 08:49:02 NoKubernetes-883000 dockerd[514]: time="2024-08-06T08:49:02.329898470Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 06 08:49:02 NoKubernetes-883000 dockerd[514]: time="2024-08-06T08:49:02.337967669Z" level=info msg="Loading containers: start."
	Aug 06 08:49:02 NoKubernetes-883000 dockerd[514]: time="2024-08-06T08:49:02.427832256Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 06 08:49:02 NoKubernetes-883000 dockerd[514]: time="2024-08-06T08:49:02.521130146Z" level=info msg="Loading containers: done."
	Aug 06 08:49:02 NoKubernetes-883000 dockerd[514]: time="2024-08-06T08:49:02.532627500Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 06 08:49:02 NoKubernetes-883000 dockerd[514]: time="2024-08-06T08:49:02.533701701Z" level=info msg="Daemon has completed initialization"
	Aug 06 08:49:02 NoKubernetes-883000 dockerd[514]: time="2024-08-06T08:49:02.559788258Z" level=info msg="API listen on [::]:2376"
	Aug 06 08:49:02 NoKubernetes-883000 systemd[1]: Started Docker Application Container Engine.
	Aug 06 08:49:02 NoKubernetes-883000 dockerd[514]: time="2024-08-06T08:49:02.560517192Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 06 08:49:03 NoKubernetes-883000 dockerd[514]: time="2024-08-06T08:49:03.586072117Z" level=info msg="Processing signal 'terminated'"
	Aug 06 08:49:03 NoKubernetes-883000 dockerd[514]: time="2024-08-06T08:49:03.586882376Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 06 08:49:03 NoKubernetes-883000 dockerd[514]: time="2024-08-06T08:49:03.587052611Z" level=info msg="Daemon shutdown complete"
	Aug 06 08:49:03 NoKubernetes-883000 dockerd[514]: time="2024-08-06T08:49:03.587101412Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 06 08:49:03 NoKubernetes-883000 dockerd[514]: time="2024-08-06T08:49:03.587114174Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 06 08:49:03 NoKubernetes-883000 systemd[1]: Stopping Docker Application Container Engine...
	Aug 06 08:49:04 NoKubernetes-883000 systemd[1]: docker.service: Deactivated successfully.
	Aug 06 08:49:04 NoKubernetes-883000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 08:49:04 NoKubernetes-883000 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[916]: time="2024-08-06T08:49:04.618318101Z" level=info msg="Starting up"
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[916]: time="2024-08-06T08:49:04.618753699Z" level=info msg="containerd not running, starting managed containerd"
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[916]: time="2024-08-06T08:49:04.619321272Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=923
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.637486638Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.653755730Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.653854129Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.653934565Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.653976437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.654018853Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.654050021Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.654182598Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.654220207Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.654251389Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.654285840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.654326844Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.654433287Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.656009354Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.656054395Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.656175358Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.656216448Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.656252175Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.656285625Z" level=info msg="metadata content store policy set" policy=shared
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.656465349Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.656515541Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.656547554Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.656579372Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.656609667Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.656660545Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.656861504Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.656979629Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657015845Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657046787Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657077137Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657111520Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657141131Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657175495Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657207831Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657240870Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657270359Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657301511Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657347226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657382740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657412812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657443441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657472621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657502459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657532152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657561548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657590677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657623645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657657021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657686205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657714760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657745231Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657779007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657809440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657838933Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657910972Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657958968Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.657991201Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.658020122Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.658077548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.658115466Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.658144220Z" level=info msg="NRI interface is disabled by configuration."
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.658328310Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.658414878Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.658473905Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 06 08:49:04 NoKubernetes-883000 dockerd[923]: time="2024-08-06T08:49:04.658518330Z" level=info msg="containerd successfully booted in 0.021616s"
	Aug 06 08:49:05 NoKubernetes-883000 dockerd[916]: time="2024-08-06T08:49:05.677576067Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 06 08:49:05 NoKubernetes-883000 dockerd[916]: time="2024-08-06T08:49:05.680649829Z" level=info msg="Loading containers: start."
	Aug 06 08:49:05 NoKubernetes-883000 dockerd[916]: time="2024-08-06T08:49:05.750257598Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 06 08:49:05 NoKubernetes-883000 dockerd[916]: time="2024-08-06T08:49:05.805287456Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 06 08:49:05 NoKubernetes-883000 dockerd[916]: time="2024-08-06T08:49:05.850009360Z" level=info msg="Loading containers: done."
	Aug 06 08:49:05 NoKubernetes-883000 dockerd[916]: time="2024-08-06T08:49:05.862001751Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 06 08:49:05 NoKubernetes-883000 dockerd[916]: time="2024-08-06T08:49:05.862095231Z" level=info msg="Daemon has completed initialization"
	Aug 06 08:49:05 NoKubernetes-883000 dockerd[916]: time="2024-08-06T08:49:05.879698335Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 06 08:49:05 NoKubernetes-883000 dockerd[916]: time="2024-08-06T08:49:05.879844545Z" level=info msg="API listen on [::]:2376"
	Aug 06 08:49:05 NoKubernetes-883000 systemd[1]: Started Docker Application Container Engine.
	Aug 06 08:49:10 NoKubernetes-883000 dockerd[916]: time="2024-08-06T08:49:10.694269435Z" level=info msg="Processing signal 'terminated'"
	Aug 06 08:49:10 NoKubernetes-883000 dockerd[916]: time="2024-08-06T08:49:10.695168665Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 06 08:49:10 NoKubernetes-883000 systemd[1]: Stopping Docker Application Container Engine...
	Aug 06 08:49:10 NoKubernetes-883000 dockerd[916]: time="2024-08-06T08:49:10.695757036Z" level=info msg="Daemon shutdown complete"
	Aug 06 08:49:10 NoKubernetes-883000 dockerd[916]: time="2024-08-06T08:49:10.695858545Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 06 08:49:10 NoKubernetes-883000 dockerd[916]: time="2024-08-06T08:49:10.695905825Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 06 08:49:11 NoKubernetes-883000 systemd[1]: docker.service: Deactivated successfully.
	Aug 06 08:49:11 NoKubernetes-883000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 08:49:11 NoKubernetes-883000 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:11.731728754Z" level=info msg="Starting up"
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:11.732251876Z" level=info msg="containerd not running, starting managed containerd"
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:11.732813964Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1278
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.749289827Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.765721549Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.765775123Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.765808534Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.765819468Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.765840566Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.765849814Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.766013617Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.766050243Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.766065302Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.766075818Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.766093828Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.766172257Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.767858332Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.767900257Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768017194Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768053151Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768097173Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768113532Z" level=info msg="metadata content store policy set" policy=shared
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768256116Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768304007Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768317426Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768327622Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768337505Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768378465Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768523714Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768589418Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768626724Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768638812Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768654854Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768669311Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768678388Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768694745Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768707580Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768716662Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768725445Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768734197Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768748599Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768769090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768781170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768790602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768801716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768825953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768862517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768885091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768901966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768924476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768961685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768973137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768982034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.768993972Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.769057348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.769066458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.769074171Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.769124032Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.769139149Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.769147507Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.769155756Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.769162659Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.769171941Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.769179387Z" level=info msg="NRI interface is disabled by configuration."
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.769338116Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.769401248Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.769430321Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 06 08:49:11 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:11.769464634Z" level=info msg="containerd successfully booted in 0.020797s"
	Aug 06 08:49:12 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:12.776494839Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 06 08:49:13 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:13.222187609Z" level=info msg="Loading containers: start."
	Aug 06 08:49:13 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:13.293026496Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 06 08:49:13 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:13.354381111Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 06 08:49:13 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:13.396451137Z" level=info msg="Loading containers: done."
	Aug 06 08:49:13 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:13.407341841Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 06 08:49:13 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:13.407425697Z" level=info msg="Daemon has completed initialization"
	Aug 06 08:49:13 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:13.428235133Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 06 08:49:13 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:13.428309620Z" level=info msg="API listen on [::]:2376"
	Aug 06 08:49:13 NoKubernetes-883000 systemd[1]: Started Docker Application Container Engine.
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.319485654Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.319561053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.319630126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.319715279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.327534147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.327593543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.327610661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.327706415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.329842522Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.329945225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.330664506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.330743278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.330754269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.330840188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.331270932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.331415632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.520962130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.521031407Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.521043878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.521649415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.563582183Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.563631061Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.563639594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.563694527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.563395434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.563460569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.563473714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.563542123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.581795146Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.581861237Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.581873610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 08:49:19 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:19.582032809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.347541579Z" level=info msg="shim disconnected" id=a9f9f0c068b4a199649f6e871e913fd8ea6b693138b857faa767d69202bec18c namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.347607165Z" level=warning msg="cleaning up after shim disconnected" id=a9f9f0c068b4a199649f6e871e913fd8ea6b693138b857faa767d69202bec18c namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.347617439Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:28.348600421Z" level=info msg="ignoring event" container=a9f9f0c068b4a199649f6e871e913fd8ea6b693138b857faa767d69202bec18c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:28.352761591Z" level=info msg="ignoring event" container=afab8cceafa6529dcfa145e07c2f3b79f7dc13b45fb24052db4cbfa1c243f7e7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.352985043Z" level=info msg="shim disconnected" id=afab8cceafa6529dcfa145e07c2f3b79f7dc13b45fb24052db4cbfa1c243f7e7 namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.353112588Z" level=warning msg="cleaning up after shim disconnected" id=afab8cceafa6529dcfa145e07c2f3b79f7dc13b45fb24052db4cbfa1c243f7e7 namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.353154168Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:28.360845906Z" level=info msg="ignoring event" container=dcb28844b302a887ae1aeceb30dea829e3ca11adf41d86533fc0db823fd60088 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.361136610Z" level=info msg="shim disconnected" id=dcb28844b302a887ae1aeceb30dea829e3ca11adf41d86533fc0db823fd60088 namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.361260651Z" level=warning msg="cleaning up after shim disconnected" id=dcb28844b302a887ae1aeceb30dea829e3ca11adf41d86533fc0db823fd60088 namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.361303843Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:28.363135149Z" level=info msg="ignoring event" container=60fd17d9a809019cd55339b94a3c98f4569c26a9c46699a3c3bde6f82c5ce0a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.363243371Z" level=info msg="shim disconnected" id=60fd17d9a809019cd55339b94a3c98f4569c26a9c46699a3c3bde6f82c5ce0a3 namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.363311310Z" level=warning msg="cleaning up after shim disconnected" id=60fd17d9a809019cd55339b94a3c98f4569c26a9c46699a3c3bde6f82c5ce0a3 namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.363321562Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.387748663Z" level=info msg="shim disconnected" id=452adc1fe0e1b44dc2788481c2f0a73eba9b359c4d93f53ec6c2589507d689a7 namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.387857984Z" level=warning msg="cleaning up after shim disconnected" id=452adc1fe0e1b44dc2788481c2f0a73eba9b359c4d93f53ec6c2589507d689a7 namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.387893428Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:28.388660855Z" level=info msg="ignoring event" container=452adc1fe0e1b44dc2788481c2f0a73eba9b359c4d93f53ec6c2589507d689a7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.390035682Z" level=info msg="shim disconnected" id=3612cce423ad7cf7143ebcb176ac26e1a9af335107cb4d742eaa53fcb5bc6d5e namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:28.390162234Z" level=info msg="ignoring event" container=3612cce423ad7cf7143ebcb176ac26e1a9af335107cb4d742eaa53fcb5bc6d5e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.390235887Z" level=warning msg="cleaning up after shim disconnected" id=3612cce423ad7cf7143ebcb176ac26e1a9af335107cb4d742eaa53fcb5bc6d5e namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.390277850Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.393513917Z" level=info msg="shim disconnected" id=1c10df0681379f4f9494269eec5e242a2f9181e9a7b81cba29227970bd2b2d14 namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:28.393650959Z" level=info msg="ignoring event" container=1c10df0681379f4f9494269eec5e242a2f9181e9a7b81cba29227970bd2b2d14 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.395534820Z" level=warning msg="cleaning up after shim disconnected" id=1c10df0681379f4f9494269eec5e242a2f9181e9a7b81cba29227970bd2b2d14 namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.395598213Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.401432763Z" level=warning msg="cleanup warnings time=\"2024-08-06T08:49:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.428796226Z" level=warning msg="cleanup warnings time=\"2024-08-06T08:49:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 06 08:49:28 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:28.438229931Z" level=warning msg="cleanup warnings time=\"2024-08-06T08:49:28Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Aug 06 08:49:38 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:38.330846636Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=5bdab6d0a04c0d6d32149324abcb0988043e6f0f00ff73ffc8e9c8bf782abc32
	Aug 06 08:49:38 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:38.371145144Z" level=info msg="ignoring event" container=5bdab6d0a04c0d6d32149324abcb0988043e6f0f00ff73ffc8e9c8bf782abc32 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 06 08:49:38 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:38.371900583Z" level=info msg="shim disconnected" id=5bdab6d0a04c0d6d32149324abcb0988043e6f0f00ff73ffc8e9c8bf782abc32 namespace=moby
	Aug 06 08:49:38 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:38.372105853Z" level=warning msg="cleaning up after shim disconnected" id=5bdab6d0a04c0d6d32149324abcb0988043e6f0f00ff73ffc8e9c8bf782abc32 namespace=moby
	Aug 06 08:49:38 NoKubernetes-883000 dockerd[1278]: time="2024-08-06T08:49:38.372208024Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 06 08:49:39 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:39.130759163Z" level=info msg="Processing signal 'terminated'"
	Aug 06 08:49:39 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:39.131783672Z" level=info msg="Daemon shutdown complete"
	Aug 06 08:49:39 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:39.131883228Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 06 08:49:39 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:39.131950296Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Aug 06 08:49:39 NoKubernetes-883000 dockerd[1272]: time="2024-08-06T08:49:39.131963533Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 06 08:49:39 NoKubernetes-883000 systemd[1]: Stopping Docker Application Container Engine...
	Aug 06 08:49:40 NoKubernetes-883000 systemd[1]: docker.service: Deactivated successfully.
	Aug 06 08:49:40 NoKubernetes-883000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 08:49:40 NoKubernetes-883000 systemd[1]: docker.service: Consumed 1.283s CPU time.
	Aug 06 08:49:40 NoKubernetes-883000 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 08:49:40 NoKubernetes-883000 dockerd[2669]: time="2024-08-06T08:49:40.165462276Z" level=info msg="Starting up"
	Aug 06 08:50:40 NoKubernetes-883000 dockerd[2669]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 06 08:50:40 NoKubernetes-883000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 06 08:50:40 NoKubernetes-883000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 06 08:50:40 NoKubernetes-883000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0806 01:50:40.147171    8688 out.go:239] * 
	W0806 01:50:40.147801    8688 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0806 01:50:40.208769    8688 out.go:177] 
	I0806 01:50:37.063407    8911 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0806 01:50:37.162309    8911 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0806 01:50:37.279353    8911 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0806 01:50:37.279441    8911 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0806 01:50:37.293886    8911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 01:50:37.403968    8911 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 01:50:39.732000    8911 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.328020198s)
	I0806 01:50:39.732060    8911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0806 01:50:39.744015    8911 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0806 01:50:39.757644    8911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0806 01:50:39.768605    8911 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0806 01:50:39.865355    8911 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0806 01:50:39.974846    8911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 01:50:40.079673    8911 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0806 01:50:40.093301    8911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0806 01:50:40.104177    8911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 01:50:40.199259    8911 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0806 01:50:40.259747    8911 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0806 01:50:40.259878    8911 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0806 01:50:40.268272    8911 start.go:563] Will wait 60s for crictl version
	I0806 01:50:40.268321    8911 ssh_runner.go:195] Run: which crictl
	I0806 01:50:40.271233    8911 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0806 01:50:40.297758    8911 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0806 01:50:40.297840    8911 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0806 01:50:40.315562    8911 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	
	
	==> Docker <==
	Aug 06 08:50:40 NoKubernetes-883000 cri-dockerd[1170]: time="2024-08-06T08:50:40Z" level=error msg="error getting RW layer size for container ID 'afab8cceafa6529dcfa145e07c2f3b79f7dc13b45fb24052db4cbfa1c243f7e7': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/afab8cceafa6529dcfa145e07c2f3b79f7dc13b45fb24052db4cbfa1c243f7e7/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 06 08:50:40 NoKubernetes-883000 cri-dockerd[1170]: time="2024-08-06T08:50:40Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'afab8cceafa6529dcfa145e07c2f3b79f7dc13b45fb24052db4cbfa1c243f7e7'"
	Aug 06 08:50:40 NoKubernetes-883000 cri-dockerd[1170]: time="2024-08-06T08:50:40Z" level=error msg="error getting RW layer size for container ID '60fd17d9a809019cd55339b94a3c98f4569c26a9c46699a3c3bde6f82c5ce0a3': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/60fd17d9a809019cd55339b94a3c98f4569c26a9c46699a3c3bde6f82c5ce0a3/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 06 08:50:40 NoKubernetes-883000 cri-dockerd[1170]: time="2024-08-06T08:50:40Z" level=error msg="Set backoffDuration to : 1m0s for container ID '60fd17d9a809019cd55339b94a3c98f4569c26a9c46699a3c3bde6f82c5ce0a3'"
	Aug 06 08:50:40 NoKubernetes-883000 cri-dockerd[1170]: time="2024-08-06T08:50:40Z" level=error msg="error getting RW layer size for container ID '1c10df0681379f4f9494269eec5e242a2f9181e9a7b81cba29227970bd2b2d14': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/1c10df0681379f4f9494269eec5e242a2f9181e9a7b81cba29227970bd2b2d14/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 06 08:50:40 NoKubernetes-883000 cri-dockerd[1170]: time="2024-08-06T08:50:40Z" level=error msg="Set backoffDuration to : 1m0s for container ID '1c10df0681379f4f9494269eec5e242a2f9181e9a7b81cba29227970bd2b2d14'"
	Aug 06 08:50:40 NoKubernetes-883000 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Aug 06 08:50:40 NoKubernetes-883000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 08:50:40 NoKubernetes-883000 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 08:50:40 NoKubernetes-883000 dockerd[2866]: time="2024-08-06T08:50:40.515940153Z" level=info msg="Starting up"
	Aug 06 08:51:40 NoKubernetes-883000 dockerd[2866]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 06 08:51:40 NoKubernetes-883000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 06 08:51:40 NoKubernetes-883000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 06 08:51:40 NoKubernetes-883000 systemd[1]: Failed to start Docker Application Container Engine.
	Aug 06 08:51:40 NoKubernetes-883000 cri-dockerd[1170]: time="2024-08-06T08:51:40Z" level=error msg="error getting RW layer size for container ID '60fd17d9a809019cd55339b94a3c98f4569c26a9c46699a3c3bde6f82c5ce0a3': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/60fd17d9a809019cd55339b94a3c98f4569c26a9c46699a3c3bde6f82c5ce0a3/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 06 08:51:40 NoKubernetes-883000 cri-dockerd[1170]: time="2024-08-06T08:51:40Z" level=error msg="Set backoffDuration to : 1m0s for container ID '60fd17d9a809019cd55339b94a3c98f4569c26a9c46699a3c3bde6f82c5ce0a3'"
	Aug 06 08:51:40 NoKubernetes-883000 cri-dockerd[1170]: time="2024-08-06T08:51:40Z" level=error msg="error getting RW layer size for container ID 'afab8cceafa6529dcfa145e07c2f3b79f7dc13b45fb24052db4cbfa1c243f7e7': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/afab8cceafa6529dcfa145e07c2f3b79f7dc13b45fb24052db4cbfa1c243f7e7/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 06 08:51:40 NoKubernetes-883000 cri-dockerd[1170]: time="2024-08-06T08:51:40Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'afab8cceafa6529dcfa145e07c2f3b79f7dc13b45fb24052db4cbfa1c243f7e7'"
	Aug 06 08:51:40 NoKubernetes-883000 cri-dockerd[1170]: time="2024-08-06T08:51:40Z" level=error msg="error getting RW layer size for container ID '1c10df0681379f4f9494269eec5e242a2f9181e9a7b81cba29227970bd2b2d14': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/1c10df0681379f4f9494269eec5e242a2f9181e9a7b81cba29227970bd2b2d14/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 06 08:51:40 NoKubernetes-883000 cri-dockerd[1170]: time="2024-08-06T08:51:40Z" level=error msg="Set backoffDuration to : 1m0s for container ID '1c10df0681379f4f9494269eec5e242a2f9181e9a7b81cba29227970bd2b2d14'"
	Aug 06 08:51:40 NoKubernetes-883000 cri-dockerd[1170]: time="2024-08-06T08:51:40Z" level=error msg="error getting RW layer size for container ID '5bdab6d0a04c0d6d32149324abcb0988043e6f0f00ff73ffc8e9c8bf782abc32': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/5bdab6d0a04c0d6d32149324abcb0988043e6f0f00ff73ffc8e9c8bf782abc32/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 06 08:51:40 NoKubernetes-883000 cri-dockerd[1170]: time="2024-08-06T08:51:40Z" level=error msg="Set backoffDuration to : 1m0s for container ID '5bdab6d0a04c0d6d32149324abcb0988043e6f0f00ff73ffc8e9c8bf782abc32'"
	Aug 06 08:51:40 NoKubernetes-883000 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Aug 06 08:51:40 NoKubernetes-883000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 08:51:40 NoKubernetes-883000 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-06T08:51:40Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v0.0.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v0.0.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	sudo: /var/lib/minikube/binaries/v0.0.0/kubectl: command not found
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Aug 6 08:49] systemd-fstab-generator[494]: Ignoring "noauto" option for root device
	[  +0.100351] systemd-fstab-generator[506]: Ignoring "noauto" option for root device
	[  +1.794107] systemd-fstab-generator[843]: Ignoring "noauto" option for root device
	[  +0.294415] systemd-fstab-generator[882]: Ignoring "noauto" option for root device
	[  +0.117074] systemd-fstab-generator[894]: Ignoring "noauto" option for root device
	[  +0.136119] systemd-fstab-generator[908]: Ignoring "noauto" option for root device
	[  +0.058779] kauditd_printk_skb: 139 callbacks suppressed
	[  +2.382518] systemd-fstab-generator[1123]: Ignoring "noauto" option for root device
	[  +0.106273] systemd-fstab-generator[1135]: Ignoring "noauto" option for root device
	[  +0.106414] systemd-fstab-generator[1147]: Ignoring "noauto" option for root device
	[  +0.140634] systemd-fstab-generator[1162]: Ignoring "noauto" option for root device
	[  +4.301732] systemd-fstab-generator[1263]: Ignoring "noauto" option for root device
	[  +0.064283] kauditd_printk_skb: 136 callbacks suppressed
	[  +2.913490] systemd-fstab-generator[1512]: Ignoring "noauto" option for root device
	[  +3.585641] systemd-fstab-generator[1688]: Ignoring "noauto" option for root device
	[  +0.054828] kauditd_printk_skb: 70 callbacks suppressed
	[  +6.971817] systemd-fstab-generator[2098]: Ignoring "noauto" option for root device
	[  +0.097112] kauditd_printk_skb: 62 callbacks suppressed
	[  +1.121017] systemd-fstab-generator[2160]: Ignoring "noauto" option for root device
	[ +12.965816] kauditd_printk_skb: 41 callbacks suppressed
	[  +0.217910] systemd-fstab-generator[2599]: Ignoring "noauto" option for root device
	[  +0.239962] systemd-fstab-generator[2635]: Ignoring "noauto" option for root device
	[  +0.105102] systemd-fstab-generator[2647]: Ignoring "noauto" option for root device
	[  +0.115417] systemd-fstab-generator[2661]: Ignoring "noauto" option for root device
	
	
	==> kernel <==
	 08:52:41 up 3 min,  0 users,  load average: 0.02, 0.05, 0.02
	Linux NoKubernetes-883000 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 06 08:49:24 NoKubernetes-883000 kubelet[2106]: I0806 08:49:24.553930    2106 topology_manager.go:215] "Topology Admit Handler" podUID="17bed36778b5b5f05ef4a5fbe8acd7b4" podNamespace="kube-system" podName="etcd-nokubernetes-883000"
	Aug 06 08:49:24 NoKubernetes-883000 kubelet[2106]: I0806 08:49:24.625360    2106 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b8ccb3a20d980f06b1ffd004b127af11-kubeconfig\") pod \"kube-controller-manager-nokubernetes-883000\" (UID: \"b8ccb3a20d980f06b1ffd004b127af11\") " pod="kube-system/kube-controller-manager-nokubernetes-883000"
	Aug 06 08:49:24 NoKubernetes-883000 kubelet[2106]: I0806 08:49:24.625390    2106 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a340141d0bec8f6e3745c06f051cc90-k8s-certs\") pod \"kube-apiserver-nokubernetes-883000\" (UID: \"0a340141d0bec8f6e3745c06f051cc90\") " pod="kube-system/kube-apiserver-nokubernetes-883000"
	Aug 06 08:49:24 NoKubernetes-883000 kubelet[2106]: I0806 08:49:24.625409    2106 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a340141d0bec8f6e3745c06f051cc90-usr-share-ca-certificates\") pod \"kube-apiserver-nokubernetes-883000\" (UID: \"0a340141d0bec8f6e3745c06f051cc90\") " pod="kube-system/kube-apiserver-nokubernetes-883000"
	Aug 06 08:49:24 NoKubernetes-883000 kubelet[2106]: I0806 08:49:24.625425    2106 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b8ccb3a20d980f06b1ffd004b127af11-ca-certs\") pod \"kube-controller-manager-nokubernetes-883000\" (UID: \"b8ccb3a20d980f06b1ffd004b127af11\") " pod="kube-system/kube-controller-manager-nokubernetes-883000"
	Aug 06 08:49:24 NoKubernetes-883000 kubelet[2106]: I0806 08:49:24.625441    2106 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b8ccb3a20d980f06b1ffd004b127af11-flexvolume-dir\") pod \"kube-controller-manager-nokubernetes-883000\" (UID: \"b8ccb3a20d980f06b1ffd004b127af11\") " pod="kube-system/kube-controller-manager-nokubernetes-883000"
	Aug 06 08:49:24 NoKubernetes-883000 kubelet[2106]: I0806 08:49:24.625452    2106 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/17bed36778b5b5f05ef4a5fbe8acd7b4-etcd-certs\") pod \"etcd-nokubernetes-883000\" (UID: \"17bed36778b5b5f05ef4a5fbe8acd7b4\") " pod="kube-system/etcd-nokubernetes-883000"
	Aug 06 08:49:24 NoKubernetes-883000 kubelet[2106]: I0806 08:49:24.625461    2106 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/17bed36778b5b5f05ef4a5fbe8acd7b4-etcd-data\") pod \"etcd-nokubernetes-883000\" (UID: \"17bed36778b5b5f05ef4a5fbe8acd7b4\") " pod="kube-system/etcd-nokubernetes-883000"
	Aug 06 08:49:24 NoKubernetes-883000 kubelet[2106]: I0806 08:49:24.625471    2106 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a340141d0bec8f6e3745c06f051cc90-ca-certs\") pod \"kube-apiserver-nokubernetes-883000\" (UID: \"0a340141d0bec8f6e3745c06f051cc90\") " pod="kube-system/kube-apiserver-nokubernetes-883000"
	Aug 06 08:49:24 NoKubernetes-883000 kubelet[2106]: I0806 08:49:24.625482    2106 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b8ccb3a20d980f06b1ffd004b127af11-k8s-certs\") pod \"kube-controller-manager-nokubernetes-883000\" (UID: \"b8ccb3a20d980f06b1ffd004b127af11\") " pod="kube-system/kube-controller-manager-nokubernetes-883000"
	Aug 06 08:49:24 NoKubernetes-883000 kubelet[2106]: I0806 08:49:24.625492    2106 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b8ccb3a20d980f06b1ffd004b127af11-usr-share-ca-certificates\") pod \"kube-controller-manager-nokubernetes-883000\" (UID: \"b8ccb3a20d980f06b1ffd004b127af11\") " pod="kube-system/kube-controller-manager-nokubernetes-883000"
	Aug 06 08:49:24 NoKubernetes-883000 kubelet[2106]: I0806 08:49:24.625501    2106 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7abc356cbb90a86bad63d89ffb5681a9-kubeconfig\") pod \"kube-scheduler-nokubernetes-883000\" (UID: \"7abc356cbb90a86bad63d89ffb5681a9\") " pod="kube-system/kube-scheduler-nokubernetes-883000"
	Aug 06 08:49:25 NoKubernetes-883000 kubelet[2106]: I0806 08:49:25.404181    2106 apiserver.go:52] "Watching apiserver"
	Aug 06 08:49:25 NoKubernetes-883000 kubelet[2106]: I0806 08:49:25.422619    2106 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Aug 06 08:49:25 NoKubernetes-883000 kubelet[2106]: E0806 08:49:25.528490    2106 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-scheduler-nokubernetes-883000\" already exists" pod="kube-system/kube-scheduler-nokubernetes-883000"
	Aug 06 08:49:25 NoKubernetes-883000 kubelet[2106]: E0806 08:49:25.528623    2106 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-nokubernetes-883000\" already exists" pod="kube-system/kube-controller-manager-nokubernetes-883000"
	Aug 06 08:49:25 NoKubernetes-883000 kubelet[2106]: E0806 08:49:25.528996    2106 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-apiserver-nokubernetes-883000\" already exists" pod="kube-system/kube-apiserver-nokubernetes-883000"
	Aug 06 08:49:25 NoKubernetes-883000 kubelet[2106]: I0806 08:49:25.554410    2106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-nokubernetes-883000" podStartSLOduration=1.5543959200000002 podStartE2EDuration="1.55439592s" podCreationTimestamp="2024-08-06 08:49:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-06 08:49:25.545220737 +0000 UTC m=+1.210126146" watchObservedRunningTime="2024-08-06 08:49:25.55439592 +0000 UTC m=+1.219301327"
	Aug 06 08:49:25 NoKubernetes-883000 kubelet[2106]: I0806 08:49:25.567762    2106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-nokubernetes-883000" podStartSLOduration=1.567709392 podStartE2EDuration="1.567709392s" podCreationTimestamp="2024-08-06 08:49:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-06 08:49:25.554704436 +0000 UTC m=+1.219609843" watchObservedRunningTime="2024-08-06 08:49:25.567709392 +0000 UTC m=+1.232614798"
	Aug 06 08:49:25 NoKubernetes-883000 kubelet[2106]: I0806 08:49:25.567830    2106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-nokubernetes-883000" podStartSLOduration=1.567826618 podStartE2EDuration="1.567826618s" podCreationTimestamp="2024-08-06 08:49:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-06 08:49:25.567547323 +0000 UTC m=+1.232452736" watchObservedRunningTime="2024-08-06 08:49:25.567826618 +0000 UTC m=+1.232732032"
	Aug 06 08:49:25 NoKubernetes-883000 kubelet[2106]: I0806 08:49:25.628871    2106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-nokubernetes-883000" podStartSLOduration=1.628842486 podStartE2EDuration="1.628842486s" podCreationTimestamp="2024-08-06 08:49:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-06 08:49:25.596632348 +0000 UTC m=+1.261537755" watchObservedRunningTime="2024-08-06 08:49:25.628842486 +0000 UTC m=+1.293747892"
	Aug 06 08:49:26 NoKubernetes-883000 kubelet[2106]: I0806 08:49:26.479220    2106 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	Aug 06 08:49:28 NoKubernetes-883000 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 06 08:49:28 NoKubernetes-883000 systemd[1]: kubelet.service: Deactivated successfully.
	Aug 06 08:49:28 NoKubernetes-883000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	

-- /stdout --
** stderr ** 
	E0806 01:51:40.387474    8935 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0806 01:51:40.400631    8935 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0806 01:51:40.412008    8935 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0806 01:51:40.426413    8935 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0806 01:51:40.437182    8935 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0806 01:51:40.448193    8935 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0806 01:51:40.460115    8935 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0806 01:51:40.471042    8935 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p NoKubernetes-883000 -n NoKubernetes-883000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p NoKubernetes-883000 -n NoKubernetes-883000: exit status 2 (166.118671ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "NoKubernetes-883000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (194.81s)

TestNoKubernetes/serial/Start (180.41s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-883000 --no-kubernetes --driver=hyperkit 
E0806 01:52:41.509170    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
E0806 01:53:05.506528    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-883000 --no-kubernetes --driver=hyperkit : signal: killed (33.146734236s)

-- stdout --
	* [NoKubernetes-883000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-883000
	* Updating the running hyperkit "NoKubernetes-883000" VM ...

-- /stdout --
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-amd64 start -p NoKubernetes-883000 --no-kubernetes --driver=hyperkit " : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-883000 -n NoKubernetes-883000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-883000 -n NoKubernetes-883000: exit status 2 (144.130133ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestNoKubernetes/serial/Start FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestNoKubernetes/serial/Start]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-883000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p NoKubernetes-883000 logs -n 25: (2m26.952481673s)
helpers_test.go:252: TestNoKubernetes/serial/Start logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |       Profile       |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-060000 sudo                               | kindnet-060000      | jenkins | v1.33.1 | 06 Aug 24 01:52 PDT | 06 Aug 24 01:52 PDT |
	|         | systemctl cat kubelet                                |                     |         |         |                     |                     |
	|         | --no-pager                                           |                     |         |         |                     |                     |
	| ssh     | -p kindnet-060000 sudo                               | kindnet-060000      | jenkins | v1.33.1 | 06 Aug 24 01:52 PDT | 06 Aug 24 01:52 PDT |
	|         | journalctl -xeu kubelet --all                        |                     |         |         |                     |                     |
	|         | --full --no-pager                                    |                     |         |         |                     |                     |
	| ssh     | -p kindnet-060000 sudo cat                           | kindnet-060000      | jenkins | v1.33.1 | 06 Aug 24 01:52 PDT | 06 Aug 24 01:52 PDT |
	|         | /etc/kubernetes/kubelet.conf                         |                     |         |         |                     |                     |
	| ssh     | -p kindnet-060000 sudo cat                           | kindnet-060000      | jenkins | v1.33.1 | 06 Aug 24 01:52 PDT | 06 Aug 24 01:52 PDT |
	|         | /var/lib/kubelet/config.yaml                         |                     |         |         |                     |                     |
	| ssh     | -p kindnet-060000 sudo                               | kindnet-060000      | jenkins | v1.33.1 | 06 Aug 24 01:52 PDT | 06 Aug 24 01:52 PDT |
	|         | systemctl status docker --all                        |                     |         |         |                     |                     |
	|         | --full --no-pager                                    |                     |         |         |                     |                     |
	| ssh     | -p kindnet-060000 sudo                               | kindnet-060000      | jenkins | v1.33.1 | 06 Aug 24 01:52 PDT | 06 Aug 24 01:52 PDT |
	|         | systemctl cat docker                                 |                     |         |         |                     |                     |
	|         | --no-pager                                           |                     |         |         |                     |                     |
	| ssh     | -p kindnet-060000 sudo cat                           | kindnet-060000      | jenkins | v1.33.1 | 06 Aug 24 01:52 PDT | 06 Aug 24 01:52 PDT |
	|         | /etc/docker/daemon.json                              |                     |         |         |                     |                     |
	| ssh     | -p kindnet-060000 sudo docker                        | kindnet-060000      | jenkins | v1.33.1 | 06 Aug 24 01:52 PDT | 06 Aug 24 01:52 PDT |
	|         | system info                                          |                     |         |         |                     |                     |
	| ssh     | -p kindnet-060000 sudo                               | kindnet-060000      | jenkins | v1.33.1 | 06 Aug 24 01:52 PDT | 06 Aug 24 01:52 PDT |
	|         | systemctl status cri-docker                          |                     |         |         |                     |                     |
	|         | --all --full --no-pager                              |                     |         |         |                     |                     |
	| ssh     | -p kindnet-060000 sudo                               | kindnet-060000      | jenkins | v1.33.1 | 06 Aug 24 01:52 PDT | 06 Aug 24 01:52 PDT |
	|         | systemctl cat cri-docker                             |                     |         |         |                     |                     |
	|         | --no-pager                                           |                     |         |         |                     |                     |
	| ssh     | -p kindnet-060000 sudo cat                           | kindnet-060000      | jenkins | v1.33.1 | 06 Aug 24 01:52 PDT | 06 Aug 24 01:52 PDT |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                     |         |         |                     |                     |
	| ssh     | -p kindnet-060000 sudo cat                           | kindnet-060000      | jenkins | v1.33.1 | 06 Aug 24 01:52 PDT | 06 Aug 24 01:52 PDT |
	|         | /usr/lib/systemd/system/cri-docker.service           |                     |         |         |                     |                     |
	| ssh     | -p kindnet-060000 sudo                               | kindnet-060000      | jenkins | v1.33.1 | 06 Aug 24 01:52 PDT | 06 Aug 24 01:52 PDT |
	|         | cri-dockerd --version                                |                     |         |         |                     |                     |
	| ssh     | -p kindnet-060000 sudo                               | kindnet-060000      | jenkins | v1.33.1 | 06 Aug 24 01:52 PDT |                     |
	|         | systemctl status containerd                          |                     |         |         |                     |                     |
	|         | --all --full --no-pager                              |                     |         |         |                     |                     |
	| ssh     | -p kindnet-060000 sudo                               | kindnet-060000      | jenkins | v1.33.1 | 06 Aug 24 01:52 PDT | 06 Aug 24 01:52 PDT |
	|         | systemctl cat containerd                             |                     |         |         |                     |                     |
	|         | --no-pager                                           |                     |         |         |                     |                     |
	| ssh     | -p kindnet-060000 sudo cat                           | kindnet-060000      | jenkins | v1.33.1 | 06 Aug 24 01:52 PDT | 06 Aug 24 01:52 PDT |
	|         | /lib/systemd/system/containerd.service               |                     |         |         |                     |                     |
	| ssh     | -p kindnet-060000 sudo cat                           | kindnet-060000      | jenkins | v1.33.1 | 06 Aug 24 01:52 PDT | 06 Aug 24 01:52 PDT |
	|         | /etc/containerd/config.toml                          |                     |         |         |                     |                     |
	| ssh     | -p kindnet-060000 sudo                               | kindnet-060000      | jenkins | v1.33.1 | 06 Aug 24 01:52 PDT | 06 Aug 24 01:52 PDT |
	|         | containerd config dump                               |                     |         |         |                     |                     |
	| ssh     | -p kindnet-060000 sudo                               | kindnet-060000      | jenkins | v1.33.1 | 06 Aug 24 01:52 PDT |                     |
	|         | systemctl status crio --all                          |                     |         |         |                     |                     |
	|         | --full --no-pager                                    |                     |         |         |                     |                     |
	| ssh     | -p kindnet-060000 sudo                               | kindnet-060000      | jenkins | v1.33.1 | 06 Aug 24 01:52 PDT | 06 Aug 24 01:52 PDT |
	|         | systemctl cat crio --no-pager                        |                     |         |         |                     |                     |
	| ssh     | -p kindnet-060000 sudo find                          | kindnet-060000      | jenkins | v1.33.1 | 06 Aug 24 01:52 PDT | 06 Aug 24 01:52 PDT |
	|         | /etc/crio -type f -exec sh -c                        |                     |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                     |         |         |                     |                     |
	| ssh     | -p kindnet-060000 sudo crio                          | kindnet-060000      | jenkins | v1.33.1 | 06 Aug 24 01:52 PDT | 06 Aug 24 01:52 PDT |
	|         | config                                               |                     |         |         |                     |                     |
	| delete  | -p kindnet-060000                                    | kindnet-060000      | jenkins | v1.33.1 | 06 Aug 24 01:52 PDT | 06 Aug 24 01:52 PDT |
	| start   | -p flannel-060000                                    | flannel-060000      | jenkins | v1.33.1 | 06 Aug 24 01:52 PDT | 06 Aug 24 01:53 PDT |
	|         | --memory=3072                                        |                     |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                     |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                     |         |         |                     |                     |
	|         | --cni=flannel                                        |                     |         |         |                     |                     |
	|         | --driver=hyperkit                                    |                     |         |         |                     |                     |
	| start   | -p NoKubernetes-883000                               | NoKubernetes-883000 | jenkins | v1.33.1 | 06 Aug 24 01:52 PDT |                     |
	|         | --no-kubernetes                                      |                     |         |         |                     |                     |
	|         | --driver=hyperkit                                    |                     |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 01:52:41
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 01:52:41.396866    9198 out.go:291] Setting OutFile to fd 1 ...
	I0806 01:52:41.397190    9198 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:52:41.397193    9198 out.go:304] Setting ErrFile to fd 2...
	I0806 01:52:41.397196    9198 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 01:52:41.397406    9198 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	I0806 01:52:41.399249    9198 out.go:298] Setting JSON to false
	I0806 01:52:41.425153    9198 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":6723,"bootTime":1722927638,"procs":443,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0806 01:52:41.425263    9198 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 01:52:41.447442    9198 out.go:177] * [NoKubernetes-883000] minikube v1.33.1 on Darwin 14.5
	I0806 01:52:41.488108    9198 notify.go:220] Checking for updates...
	I0806 01:52:41.509089    9198 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 01:52:41.549914    9198 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 01:52:41.591721    9198 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0806 01:52:41.611972    9198 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 01:52:41.653792    9198 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 01:52:41.695932    9198 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 01:52:41.717364    9198 config.go:182] Loaded profile config "NoKubernetes-883000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0806 01:52:41.717711    9198 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 01:52:41.717761    9198 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 01:52:41.727051    9198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55341
	I0806 01:52:41.727453    9198 main.go:141] libmachine: () Calling .GetVersion
	I0806 01:52:41.727888    9198 main.go:141] libmachine: Using API Version  1
	I0806 01:52:41.727900    9198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 01:52:41.728123    9198 main.go:141] libmachine: () Calling .GetMachineName
	I0806 01:52:41.728272    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .DriverName
	I0806 01:52:41.728412    9198 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0806 01:52:41.728477    9198 start.go:1780] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I0806 01:52:41.728500    9198 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 01:52:41.728772    9198 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 01:52:41.728790    9198 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 01:52:41.737503    9198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55343
	I0806 01:52:41.737877    9198 main.go:141] libmachine: () Calling .GetVersion
	I0806 01:52:41.738250    9198 main.go:141] libmachine: Using API Version  1
	I0806 01:52:41.738267    9198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 01:52:41.738477    9198 main.go:141] libmachine: () Calling .GetMachineName
	I0806 01:52:41.738606    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .DriverName
	I0806 01:52:41.767146    9198 out.go:177] * Using the hyperkit driver based on existing profile
	I0806 01:52:41.808951    9198 start.go:297] selected driver: hyperkit
	I0806 01:52:41.808959    9198 start.go:901] validating driver "hyperkit" against &{Name:NoKubernetes-883000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-883000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.22 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 01:52:41.809056    9198 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 01:52:41.809106    9198 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0806 01:52:41.809189    9198 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:52:41.809310    9198 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19370-944/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0806 01:52:41.818211    9198 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0806 01:52:41.822215    9198 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 01:52:41.822233    9198 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0806 01:52:41.825063    9198 cni.go:84] Creating CNI manager for ""
	I0806 01:52:41.825086    9198 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0806 01:52:41.825109    9198 start.go:1875] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I0806 01:52:41.825173    9198 start.go:340] cluster config:
	{Name:NoKubernetes-883000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-883000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.22 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 01:52:41.825270    9198 iso.go:125] acquiring lock: {Name:mka9ceffb203a07dd8928fb34e5b66df1a4204ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 01:52:41.867114    9198 out.go:177] * Starting minikube without Kubernetes in cluster NoKubernetes-883000
	I0806 01:52:41.887871    9198 preload.go:131] Checking if preload exists for k8s version v0.0.0 and runtime docker
	W0806 01:52:41.944086    9198 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-docker-overlay2-amd64.tar.lz4 status code: 404
	I0806 01:52:41.944232    9198 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/NoKubernetes-883000/config.json ...
	I0806 01:52:41.944741    9198 start.go:360] acquireMachinesLock for NoKubernetes-883000: {Name:mk23fe223591838ba69a1052c4474834b6e8897d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0806 01:52:41.944806    9198 start.go:364] duration metric: took 56.375µs to acquireMachinesLock for "NoKubernetes-883000"
	I0806 01:52:41.944824    9198 start.go:96] Skipping create...Using existing machine configuration
	I0806 01:52:41.944831    9198 fix.go:54] fixHost starting: 
	I0806 01:52:41.945083    9198 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 01:52:41.945101    9198 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 01:52:41.953934    9198 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55345
	I0806 01:52:41.954269    9198 main.go:141] libmachine: () Calling .GetVersion
	I0806 01:52:41.954635    9198 main.go:141] libmachine: Using API Version  1
	I0806 01:52:41.954647    9198 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 01:52:41.954874    9198 main.go:141] libmachine: () Calling .GetMachineName
	I0806 01:52:41.954985    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .DriverName
	I0806 01:52:41.955083    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetState
	I0806 01:52:41.955160    9198 main.go:141] libmachine: (NoKubernetes-883000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:52:41.955238    9198 main.go:141] libmachine: (NoKubernetes-883000) DBG | hyperkit pid from json: 8659
	I0806 01:52:41.956216    9198 fix.go:112] recreateIfNeeded on NoKubernetes-883000: state=Running err=<nil>
	W0806 01:52:41.956230    9198 fix.go:138] unexpected machine state, will restart: <nil>
	I0806 01:52:41.988579    9198 out.go:177] * Updating the running hyperkit "NoKubernetes-883000" VM ...
	I0806 01:52:43.426514    9177 kubeadm.go:310] [api-check] The API server is healthy after 4.501475003s
	I0806 01:52:43.437913    9177 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0806 01:52:43.444878    9177 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0806 01:52:43.461372    9177 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0806 01:52:43.461536    9177 kubeadm.go:310] [mark-control-plane] Marking the node flannel-060000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0806 01:52:43.467178    9177 kubeadm.go:310] [bootstrap-token] Using token: s4cpeg.3p3nrg23572x9xps
	I0806 01:52:43.507923    9177 out.go:204]   - Configuring RBAC rules ...
	I0806 01:52:43.508080    9177 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0806 01:52:43.510832    9177 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0806 01:52:43.554199    9177 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0806 01:52:43.555792    9177 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0806 01:52:43.558992    9177 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0806 01:52:43.560892    9177 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0806 01:52:43.831876    9177 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0806 01:52:44.247176    9177 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0806 01:52:44.831635    9177 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0806 01:52:44.832204    9177 kubeadm.go:310] 
	I0806 01:52:44.832296    9177 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0806 01:52:44.832307    9177 kubeadm.go:310] 
	I0806 01:52:44.832369    9177 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0806 01:52:44.832374    9177 kubeadm.go:310] 
	I0806 01:52:44.832398    9177 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0806 01:52:44.832449    9177 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0806 01:52:44.832500    9177 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0806 01:52:44.832506    9177 kubeadm.go:310] 
	I0806 01:52:44.832545    9177 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0806 01:52:44.832552    9177 kubeadm.go:310] 
	I0806 01:52:44.832599    9177 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0806 01:52:44.832609    9177 kubeadm.go:310] 
	I0806 01:52:44.832658    9177 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0806 01:52:44.832721    9177 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0806 01:52:44.832773    9177 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0806 01:52:44.832779    9177 kubeadm.go:310] 
	I0806 01:52:44.832840    9177 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0806 01:52:44.832905    9177 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0806 01:52:44.832911    9177 kubeadm.go:310] 
	I0806 01:52:44.832983    9177 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token s4cpeg.3p3nrg23572x9xps \
	I0806 01:52:44.833069    9177 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e \
	I0806 01:52:44.833088    9177 kubeadm.go:310] 	--control-plane 
	I0806 01:52:44.833095    9177 kubeadm.go:310] 
	I0806 01:52:44.833157    9177 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0806 01:52:44.833164    9177 kubeadm.go:310] 
	I0806 01:52:44.833223    9177 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token s4cpeg.3p3nrg23572x9xps \
	I0806 01:52:44.833309    9177 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a9443848bf4eec4ed2472133b31ffbc5b7ea765e7678d3f26186b34ad246967e 
	I0806 01:52:44.833776    9177 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0806 01:52:44.833789    9177 cni.go:84] Creating CNI manager for "flannel"
	I0806 01:52:44.880039    9177 out.go:177] * Configuring Flannel (Container Networking Interface) ...
	I0806 01:52:42.010169    9198 machine.go:94] provisionDockerMachine start ...
	I0806 01:52:42.010191    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .DriverName
	I0806 01:52:42.010526    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHHostname
	I0806 01:52:42.010711    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHPort
	I0806 01:52:42.010917    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHKeyPath
	I0806 01:52:42.011109    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHKeyPath
	I0806 01:52:42.011331    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHUsername
	I0806 01:52:42.011578    9198 main.go:141] libmachine: Using SSH client type: native
	I0806 01:52:42.011908    9198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71dc0c0] 0x71dee20 <nil>  [] 0s} 192.169.0.22 22 <nil> <nil>}
	I0806 01:52:42.011914    9198 main.go:141] libmachine: About to run SSH command:
	hostname
	I0806 01:52:42.063261    9198 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-883000
	
	I0806 01:52:42.063273    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetMachineName
	I0806 01:52:42.063419    9198 buildroot.go:166] provisioning hostname "NoKubernetes-883000"
	I0806 01:52:42.063425    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetMachineName
	I0806 01:52:42.063542    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHHostname
	I0806 01:52:42.063640    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHPort
	I0806 01:52:42.063751    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHKeyPath
	I0806 01:52:42.063839    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHKeyPath
	I0806 01:52:42.063940    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHUsername
	I0806 01:52:42.064061    9198 main.go:141] libmachine: Using SSH client type: native
	I0806 01:52:42.064222    9198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71dc0c0] 0x71dee20 <nil>  [] 0s} 192.169.0.22 22 <nil> <nil>}
	I0806 01:52:42.064228    9198 main.go:141] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-883000 && echo "NoKubernetes-883000" | sudo tee /etc/hostname
	I0806 01:52:42.126573    9198 main.go:141] libmachine: SSH cmd err, output: <nil>: NoKubernetes-883000
	
	I0806 01:52:42.126588    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHHostname
	I0806 01:52:42.126721    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHPort
	I0806 01:52:42.126823    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHKeyPath
	I0806 01:52:42.126926    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHKeyPath
	I0806 01:52:42.127018    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHUsername
	I0806 01:52:42.127145    9198 main.go:141] libmachine: Using SSH client type: native
	I0806 01:52:42.127293    9198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71dc0c0] 0x71dee20 <nil>  [] 0s} 192.169.0.22 22 <nil> <nil>}
	I0806 01:52:42.127302    9198 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-883000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-883000/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-883000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0806 01:52:42.177263    9198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 01:52:42.177279    9198 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19370-944/.minikube CaCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19370-944/.minikube}
	I0806 01:52:42.177305    9198 buildroot.go:174] setting up certificates
	I0806 01:52:42.177313    9198 provision.go:84] configureAuth start
	I0806 01:52:42.177319    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetMachineName
	I0806 01:52:42.177456    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetIP
	I0806 01:52:42.177538    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHHostname
	I0806 01:52:42.177625    9198 provision.go:143] copyHostCerts
	I0806 01:52:42.177697    9198 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem, removing ...
	I0806 01:52:42.177709    9198 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem
	I0806 01:52:42.177859    9198 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/ca.pem (1078 bytes)
	I0806 01:52:42.178082    9198 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem, removing ...
	I0806 01:52:42.178086    9198 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem
	I0806 01:52:42.178166    9198 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/cert.pem (1123 bytes)
	I0806 01:52:42.178361    9198 exec_runner.go:144] found /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem, removing ...
	I0806 01:52:42.178364    9198 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem
	I0806 01:52:42.178439    9198 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19370-944/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19370-944/.minikube/key.pem (1679 bytes)
	I0806 01:52:42.178632    9198 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-883000 san=[127.0.0.1 192.169.0.22 NoKubernetes-883000 localhost minikube]
	I0806 01:52:42.282913    9198 provision.go:177] copyRemoteCerts
	I0806 01:52:42.282970    9198 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0806 01:52:42.282985    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHHostname
	I0806 01:52:42.283146    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHPort
	I0806 01:52:42.283234    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHKeyPath
	I0806 01:52:42.283321    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHUsername
	I0806 01:52:42.283395    9198 sshutil.go:53] new ssh client: &{IP:192.169.0.22 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/NoKubernetes-883000/id_rsa Username:docker}
	I0806 01:52:42.315906    9198 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0806 01:52:42.336298    9198 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0806 01:52:42.356307    9198 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0806 01:52:42.376231    9198 provision.go:87] duration metric: took 198.903586ms to configureAuth
	I0806 01:52:42.376241    9198 buildroot.go:189] setting minikube options for container-runtime
	I0806 01:52:42.376379    9198 config.go:182] Loaded profile config "NoKubernetes-883000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0806 01:52:42.376394    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .DriverName
	I0806 01:52:42.376529    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHHostname
	I0806 01:52:42.376604    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHPort
	I0806 01:52:42.376684    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHKeyPath
	I0806 01:52:42.376771    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHKeyPath
	I0806 01:52:42.376846    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHUsername
	I0806 01:52:42.376951    9198 main.go:141] libmachine: Using SSH client type: native
	I0806 01:52:42.377070    9198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71dc0c0] 0x71dee20 <nil>  [] 0s} 192.169.0.22 22 <nil> <nil>}
	I0806 01:52:42.377074    9198 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0806 01:52:42.427363    9198 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0806 01:52:42.427370    9198 buildroot.go:70] root file system type: tmpfs
	I0806 01:52:42.427446    9198 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0806 01:52:42.427461    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHHostname
	I0806 01:52:42.427587    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHPort
	I0806 01:52:42.427670    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHKeyPath
	I0806 01:52:42.427757    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHKeyPath
	I0806 01:52:42.427837    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHUsername
	I0806 01:52:42.428004    9198 main.go:141] libmachine: Using SSH client type: native
	I0806 01:52:42.428140    9198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71dc0c0] 0x71dee20 <nil>  [] 0s} 192.169.0.22 22 <nil> <nil>}
	I0806 01:52:42.428182    9198 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0806 01:52:42.488752    9198 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0806 01:52:42.488772    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHHostname
	I0806 01:52:42.488918    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHPort
	I0806 01:52:42.489010    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHKeyPath
	I0806 01:52:42.489083    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHKeyPath
	I0806 01:52:42.489160    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHUsername
	I0806 01:52:42.489300    9198 main.go:141] libmachine: Using SSH client type: native
	I0806 01:52:42.489438    9198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71dc0c0] 0x71dee20 <nil>  [] 0s} 192.169.0.22 22 <nil> <nil>}
	I0806 01:52:42.489447    9198 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0806 01:52:42.544355    9198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0806 01:52:42.544363    9198 machine.go:97] duration metric: took 534.187281ms to provisionDockerMachine
	I0806 01:52:42.544374    9198 start.go:293] postStartSetup for "NoKubernetes-883000" (driver="hyperkit")
	I0806 01:52:42.544380    9198 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0806 01:52:42.544388    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .DriverName
	I0806 01:52:42.544569    9198 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0806 01:52:42.544590    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHHostname
	I0806 01:52:42.544694    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHPort
	I0806 01:52:42.544820    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHKeyPath
	I0806 01:52:42.544902    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHUsername
	I0806 01:52:42.544963    9198 sshutil.go:53] new ssh client: &{IP:192.169.0.22 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/NoKubernetes-883000/id_rsa Username:docker}
	I0806 01:52:42.578192    9198 ssh_runner.go:195] Run: cat /etc/os-release
	I0806 01:52:42.581347    9198 info.go:137] Remote host: Buildroot 2023.02.9
	I0806 01:52:42.581356    9198 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/addons for local assets ...
	I0806 01:52:42.581452    9198 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19370-944/.minikube/files for local assets ...
	I0806 01:52:42.581627    9198 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem -> 14372.pem in /etc/ssl/certs
	I0806 01:52:42.581827    9198 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0806 01:52:42.589333    9198 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/ssl/certs/14372.pem --> /etc/ssl/certs/14372.pem (1708 bytes)
	I0806 01:52:42.609357    9198 start.go:296] duration metric: took 64.977027ms for postStartSetup
	I0806 01:52:42.609377    9198 fix.go:56] duration metric: took 664.550375ms for fixHost
	I0806 01:52:42.609387    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHHostname
	I0806 01:52:42.609523    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHPort
	I0806 01:52:42.609628    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHKeyPath
	I0806 01:52:42.609703    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHKeyPath
	I0806 01:52:42.609784    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHUsername
	I0806 01:52:42.609899    9198 main.go:141] libmachine: Using SSH client type: native
	I0806 01:52:42.610038    9198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71dc0c0] 0x71dee20 <nil>  [] 0s} 192.169.0.22 22 <nil> <nil>}
	I0806 01:52:42.610042    9198 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0806 01:52:42.661995    9198 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722934362.798115342
	
	I0806 01:52:42.662001    9198 fix.go:216] guest clock: 1722934362.798115342
	I0806 01:52:42.662005    9198 fix.go:229] Guest: 2024-08-06 01:52:42.798115342 -0700 PDT Remote: 2024-08-06 01:52:42.609378 -0700 PDT m=+1.250572443 (delta=188.737342ms)
	I0806 01:52:42.662025    9198 fix.go:200] guest clock delta is within tolerance: 188.737342ms
	I0806 01:52:42.662035    9198 start.go:83] releasing machines lock for "NoKubernetes-883000", held for 717.21996ms
	I0806 01:52:42.662052    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .DriverName
	I0806 01:52:42.662178    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetIP
	I0806 01:52:42.662260    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .DriverName
	I0806 01:52:42.662553    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .DriverName
	I0806 01:52:42.662655    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .DriverName
	I0806 01:52:42.662713    9198 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0806 01:52:42.662738    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHHostname
	I0806 01:52:42.662778    9198 ssh_runner.go:195] Run: cat /version.json
	I0806 01:52:42.662786    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHHostname
	I0806 01:52:42.662821    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHPort
	I0806 01:52:42.662877    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHPort
	I0806 01:52:42.662956    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHKeyPath
	I0806 01:52:42.662980    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHKeyPath
	I0806 01:52:42.663062    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHUsername
	I0806 01:52:42.663071    9198 main.go:141] libmachine: (NoKubernetes-883000) Calling .GetSSHUsername
	I0806 01:52:42.663144    9198 sshutil.go:53] new ssh client: &{IP:192.169.0.22 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/NoKubernetes-883000/id_rsa Username:docker}
	I0806 01:52:42.663159    9198 sshutil.go:53] new ssh client: &{IP:192.169.0.22 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/NoKubernetes-883000/id_rsa Username:docker}
	I0806 01:52:42.736725    9198 ssh_runner.go:195] Run: systemctl --version
	I0806 01:52:42.741035    9198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 01:52:42.753042    9198 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0806 01:52:42.757225    9198 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0806 01:52:42.757263    9198 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0806 01:52:42.764971    9198 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0806 01:52:42.764982    9198 start.go:495] detecting cgroup driver to use...
	I0806 01:52:42.765069    9198 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 01:52:42.780136    9198 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0806 01:52:42.788918    9198 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0806 01:52:42.797775    9198 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0806 01:52:42.797817    9198 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0806 01:52:42.806830    9198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 01:52:42.815861    9198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0806 01:52:42.824820    9198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0806 01:52:42.834213    9198 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0806 01:52:42.843483    9198 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0806 01:52:42.853092    9198 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0806 01:52:42.861227    9198 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0806 01:52:42.869285    9198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 01:52:42.974212    9198 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0806 01:52:42.995224    9198 start.go:495] detecting cgroup driver to use...
	I0806 01:52:42.995296    9198 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0806 01:52:43.009875    9198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 01:52:43.022824    9198 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0806 01:52:43.037861    9198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0806 01:52:43.049421    9198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0806 01:52:43.061040    9198 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0806 01:52:43.076988    9198 ssh_runner.go:195] Run: which cri-dockerd
	I0806 01:52:43.080028    9198 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0806 01:52:43.088529    9198 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0806 01:52:43.102813    9198 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0806 01:52:43.209079    9198 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0806 01:52:43.313153    9198 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0806 01:52:43.313222    9198 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0806 01:52:43.327601    9198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 01:52:43.427427    9198 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0806 01:52:44.902842    9177 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0806 01:52:44.908437    9177 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0806 01:52:44.908449    9177 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (4345 bytes)
	I0806 01:52:44.924297    9177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0806 01:52:45.229829    9177 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0806 01:52:45.229914    9177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 01:52:45.229934    9177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes flannel-060000 minikube.k8s.io/updated_at=2024_08_06T01_52_45_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=e92cb06692f5ea1ba801d10d148e5e92e807f9c8 minikube.k8s.io/name=flannel-060000 minikube.k8s.io/primary=true
	I0806 01:52:45.361961    9177 ops.go:34] apiserver oom_adj: -16
	I0806 01:52:45.362018    9177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 01:52:45.862259    9177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 01:52:46.362364    9177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 01:52:46.862908    9177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 01:52:47.363551    9177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 01:52:47.863980    9177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 01:52:48.362469    9177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 01:52:48.863068    9177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 01:52:49.363379    9177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 01:52:49.862715    9177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 01:52:50.363459    9177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 01:52:50.862131    9177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 01:52:51.362271    9177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 01:52:51.862622    9177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 01:52:52.362825    9177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 01:52:52.863650    9177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 01:52:53.362823    9177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 01:52:53.863658    9177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 01:52:54.362661    9177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 01:52:54.862969    9177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 01:52:55.362160    9177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 01:52:55.862832    9177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 01:52:56.363651    9177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 01:52:56.862080    9177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 01:52:57.363060    9177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 01:52:57.863344    9177 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0806 01:52:57.935650    9177 kubeadm.go:1113] duration metric: took 12.705829513s to wait for elevateKubeSystemPrivileges
	I0806 01:52:57.935677    9177 kubeadm.go:394] duration metric: took 23.077813916s to StartCluster
	I0806 01:52:57.935704    9177 settings.go:142] acquiring lock: {Name:mk7aec99dc6d69d6a2c18b35ff8bde3cddf78620 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 01:52:57.935804    9177 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 01:52:57.936419    9177 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/kubeconfig: {Name:mka547673b59bc4eb06e1f2c8130de31708dba29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 01:52:57.936666    9177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0806 01:52:57.936683    9177 start.go:235] Will wait 15m0s for node &{Name: IP:192.169.0.25 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0806 01:52:57.936729    9177 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0806 01:52:57.936772    9177 addons.go:69] Setting storage-provisioner=true in profile "flannel-060000"
	I0806 01:52:57.936784    9177 addons.go:69] Setting default-storageclass=true in profile "flannel-060000"
	I0806 01:52:57.936795    9177 addons.go:234] Setting addon storage-provisioner=true in "flannel-060000"
	I0806 01:52:57.936811    9177 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-060000"
	I0806 01:52:57.936812    9177 host.go:66] Checking if "flannel-060000" exists ...
	I0806 01:52:57.936820    9177 config.go:182] Loaded profile config "flannel-060000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 01:52:57.937101    9177 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 01:52:57.937101    9177 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 01:52:57.937117    9177 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 01:52:57.937118    9177 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 01:52:57.946615    9177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55366
	I0806 01:52:57.946858    9177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55368
	I0806 01:52:57.947085    9177 main.go:141] libmachine: () Calling .GetVersion
	I0806 01:52:57.947185    9177 main.go:141] libmachine: () Calling .GetVersion
	I0806 01:52:57.947439    9177 main.go:141] libmachine: Using API Version  1
	I0806 01:52:57.947449    9177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 01:52:57.947512    9177 main.go:141] libmachine: Using API Version  1
	I0806 01:52:57.947523    9177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 01:52:57.947694    9177 main.go:141] libmachine: () Calling .GetMachineName
	I0806 01:52:57.947715    9177 main.go:141] libmachine: () Calling .GetMachineName
	I0806 01:52:57.947813    9177 main.go:141] libmachine: (flannel-060000) Calling .GetState
	I0806 01:52:57.947887    9177 main.go:141] libmachine: (flannel-060000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:52:57.947980    9177 main.go:141] libmachine: (flannel-060000) DBG | hyperkit pid from json: 9187
	I0806 01:52:57.948109    9177 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 01:52:57.948125    9177 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 01:52:57.951292    9177 addons.go:234] Setting addon default-storageclass=true in "flannel-060000"
	I0806 01:52:57.951321    9177 host.go:66] Checking if "flannel-060000" exists ...
	I0806 01:52:57.951537    9177 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 01:52:57.951565    9177 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 01:52:57.957301    9177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55370
	I0806 01:52:57.957631    9177 out.go:177] * Verifying Kubernetes components...
	I0806 01:52:57.957721    9177 main.go:141] libmachine: () Calling .GetVersion
	I0806 01:52:57.958117    9177 main.go:141] libmachine: Using API Version  1
	I0806 01:52:57.958128    9177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 01:52:57.958419    9177 main.go:141] libmachine: () Calling .GetMachineName
	I0806 01:52:57.958537    9177 main.go:141] libmachine: (flannel-060000) Calling .GetState
	I0806 01:52:57.958629    9177 main.go:141] libmachine: (flannel-060000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:52:57.958700    9177 main.go:141] libmachine: (flannel-060000) DBG | hyperkit pid from json: 9187
	I0806 01:52:57.959738    9177 main.go:141] libmachine: (flannel-060000) Calling .DriverName
	I0806 01:52:57.960351    9177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55372
	I0806 01:52:57.978592    9177 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0806 01:52:57.979016    9177 main.go:141] libmachine: () Calling .GetVersion
	I0806 01:52:57.979358    9177 main.go:141] libmachine: Using API Version  1
	I0806 01:52:57.979370    9177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 01:52:57.979616    9177 main.go:141] libmachine: () Calling .GetMachineName
	I0806 01:52:57.979992    9177 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 01:52:57.980012    9177 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 01:52:57.989019    9177 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55374
	I0806 01:52:57.989366    9177 main.go:141] libmachine: () Calling .GetVersion
	I0806 01:52:57.989696    9177 main.go:141] libmachine: Using API Version  1
	I0806 01:52:57.989710    9177 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 01:52:57.989905    9177 main.go:141] libmachine: () Calling .GetMachineName
	I0806 01:52:57.990008    9177 main.go:141] libmachine: (flannel-060000) Calling .GetState
	I0806 01:52:57.990084    9177 main.go:141] libmachine: (flannel-060000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 01:52:57.990173    9177 main.go:141] libmachine: (flannel-060000) DBG | hyperkit pid from json: 9187
	I0806 01:52:57.991151    9177 main.go:141] libmachine: (flannel-060000) Calling .DriverName
	I0806 01:52:57.991270    9177 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0806 01:52:57.991277    9177 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0806 01:52:57.991285    9177 main.go:141] libmachine: (flannel-060000) Calling .GetSSHHostname
	I0806 01:52:57.991365    9177 main.go:141] libmachine: (flannel-060000) Calling .GetSSHPort
	I0806 01:52:57.991447    9177 main.go:141] libmachine: (flannel-060000) Calling .GetSSHKeyPath
	I0806 01:52:57.991530    9177 main.go:141] libmachine: (flannel-060000) Calling .GetSSHUsername
	I0806 01:52:57.991610    9177 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/flannel-060000/id_rsa Username:docker}
	I0806 01:52:58.015269    9177 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0806 01:52:58.052627    9177 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 01:52:58.052641    9177 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0806 01:52:58.052661    9177 main.go:141] libmachine: (flannel-060000) Calling .GetSSHHostname
	I0806 01:52:58.052822    9177 main.go:141] libmachine: (flannel-060000) Calling .GetSSHPort
	I0806 01:52:58.052932    9177 main.go:141] libmachine: (flannel-060000) Calling .GetSSHKeyPath
	I0806 01:52:58.053034    9177 main.go:141] libmachine: (flannel-060000) Calling .GetSSHUsername
	I0806 01:52:58.053141    9177 sshutil.go:53] new ssh client: &{IP:192.169.0.25 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/flannel-060000/id_rsa Username:docker}
	I0806 01:52:58.065091    9177 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0806 01:52:58.156037    9177 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0806 01:52:58.212178    9177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0806 01:52:58.324018    9177 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0806 01:52:58.537925    9177 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I0806 01:52:58.538708    9177 node_ready.go:35] waiting up to 15m0s for node "flannel-060000" to be "Ready" ...
	I0806 01:52:58.793269    9177 main.go:141] libmachine: Making call to close driver server
	I0806 01:52:58.793283    9177 main.go:141] libmachine: (flannel-060000) Calling .Close
	I0806 01:52:58.793269    9177 main.go:141] libmachine: Making call to close driver server
	I0806 01:52:58.793337    9177 main.go:141] libmachine: (flannel-060000) Calling .Close
	I0806 01:52:58.793454    9177 main.go:141] libmachine: Successfully made call to close driver server
	I0806 01:52:58.793465    9177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 01:52:58.793473    9177 main.go:141] libmachine: Making call to close driver server
	I0806 01:52:58.793479    9177 main.go:141] libmachine: (flannel-060000) Calling .Close
	I0806 01:52:58.793498    9177 main.go:141] libmachine: Successfully made call to close driver server
	I0806 01:52:58.793514    9177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 01:52:58.793549    9177 main.go:141] libmachine: Making call to close driver server
	I0806 01:52:58.793558    9177 main.go:141] libmachine: (flannel-060000) Calling .Close
	I0806 01:52:58.793628    9177 main.go:141] libmachine: Successfully made call to close driver server
	I0806 01:52:58.793637    9177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 01:52:58.793658    9177 main.go:141] libmachine: (flannel-060000) DBG | Closing plugin on server side
	I0806 01:52:58.793722    9177 main.go:141] libmachine: (flannel-060000) DBG | Closing plugin on server side
	I0806 01:52:58.793788    9177 main.go:141] libmachine: Successfully made call to close driver server
	I0806 01:52:58.793817    9177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 01:52:58.801495    9177 main.go:141] libmachine: Making call to close driver server
	I0806 01:52:58.801508    9177 main.go:141] libmachine: (flannel-060000) Calling .Close
	I0806 01:52:58.801698    9177 main.go:141] libmachine: Successfully made call to close driver server
	I0806 01:52:58.801707    9177 main.go:141] libmachine: Making call to close connection to plugin binary
	I0806 01:52:58.801714    9177 main.go:141] libmachine: (flannel-060000) DBG | Closing plugin on server side
	I0806 01:52:58.843940    9177 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0806 01:52:58.881053    9177 addons.go:510] duration metric: took 944.323859ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0806 01:52:59.043259    9177 kapi.go:214] "coredns" deployment in "kube-system" namespace and "flannel-060000" context rescaled to 1 replicas
	I0806 01:53:00.541435    9177 node_ready.go:53] node "flannel-060000" has status "Ready":"False"
	I0806 01:53:02.543495    9177 node_ready.go:53] node "flannel-060000" has status "Ready":"False"
	I0806 01:53:05.040634    9177 node_ready.go:53] node "flannel-060000" has status "Ready":"False"
	I0806 01:53:07.042158    9177 node_ready.go:53] node "flannel-060000" has status "Ready":"False"
	I0806 01:53:07.541865    9177 node_ready.go:49] node "flannel-060000" has status "Ready":"True"
	I0806 01:53:07.541881    9177 node_ready.go:38] duration metric: took 9.003176522s for node "flannel-060000" to be "Ready" ...
	I0806 01:53:07.541888    9177 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 01:53:07.547221    9177 pod_ready.go:78] waiting up to 15m0s for pod "coredns-7db6d8ff4d-tfb8q" in "kube-system" namespace to be "Ready" ...
	I0806 01:53:09.551298    9177 pod_ready.go:102] pod "coredns-7db6d8ff4d-tfb8q" in "kube-system" namespace has status "Ready":"False"
	I0806 01:53:10.551296    9177 pod_ready.go:92] pod "coredns-7db6d8ff4d-tfb8q" in "kube-system" namespace has status "Ready":"True"
	I0806 01:53:10.551309    9177 pod_ready.go:81] duration metric: took 3.004084795s for pod "coredns-7db6d8ff4d-tfb8q" in "kube-system" namespace to be "Ready" ...
	I0806 01:53:10.551315    9177 pod_ready.go:78] waiting up to 15m0s for pod "etcd-flannel-060000" in "kube-system" namespace to be "Ready" ...
	I0806 01:53:10.554228    9177 pod_ready.go:92] pod "etcd-flannel-060000" in "kube-system" namespace has status "Ready":"True"
	I0806 01:53:10.554237    9177 pod_ready.go:81] duration metric: took 2.917462ms for pod "etcd-flannel-060000" in "kube-system" namespace to be "Ready" ...
	I0806 01:53:10.554243    9177 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-flannel-060000" in "kube-system" namespace to be "Ready" ...
	I0806 01:53:10.556809    9177 pod_ready.go:92] pod "kube-apiserver-flannel-060000" in "kube-system" namespace has status "Ready":"True"
	I0806 01:53:10.556820    9177 pod_ready.go:81] duration metric: took 2.572062ms for pod "kube-apiserver-flannel-060000" in "kube-system" namespace to be "Ready" ...
	I0806 01:53:10.556826    9177 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-flannel-060000" in "kube-system" namespace to be "Ready" ...
	I0806 01:53:10.559711    9177 pod_ready.go:92] pod "kube-controller-manager-flannel-060000" in "kube-system" namespace has status "Ready":"True"
	I0806 01:53:10.559720    9177 pod_ready.go:81] duration metric: took 2.889657ms for pod "kube-controller-manager-flannel-060000" in "kube-system" namespace to be "Ready" ...
	I0806 01:53:10.559726    9177 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-bbm2p" in "kube-system" namespace to be "Ready" ...
	I0806 01:53:10.562206    9177 pod_ready.go:92] pod "kube-proxy-bbm2p" in "kube-system" namespace has status "Ready":"True"
	I0806 01:53:10.562214    9177 pod_ready.go:81] duration metric: took 2.483064ms for pod "kube-proxy-bbm2p" in "kube-system" namespace to be "Ready" ...
	I0806 01:53:10.562219    9177 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-flannel-060000" in "kube-system" namespace to be "Ready" ...
	I0806 01:53:10.951436    9177 pod_ready.go:92] pod "kube-scheduler-flannel-060000" in "kube-system" namespace has status "Ready":"True"
	I0806 01:53:10.951451    9177 pod_ready.go:81] duration metric: took 389.218073ms for pod "kube-scheduler-flannel-060000" in "kube-system" namespace to be "Ready" ...
	I0806 01:53:10.951460    9177 pod_ready.go:38] duration metric: took 3.409573184s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0806 01:53:10.951483    9177 api_server.go:52] waiting for apiserver process to appear ...
	I0806 01:53:10.951546    9177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 01:53:10.963606    9177 api_server.go:72] duration metric: took 13.026943827s to wait for apiserver process to appear ...
	I0806 01:53:10.963617    9177 api_server.go:88] waiting for apiserver healthz status ...
	I0806 01:53:10.963632    9177 api_server.go:253] Checking apiserver healthz at https://192.169.0.25:8443/healthz ...
	I0806 01:53:10.967343    9177 api_server.go:279] https://192.169.0.25:8443/healthz returned 200:
	ok
	I0806 01:53:10.967909    9177 api_server.go:141] control plane version: v1.30.3
	I0806 01:53:10.967921    9177 api_server.go:131] duration metric: took 4.299142ms to wait for apiserver health ...
	I0806 01:53:10.967926    9177 system_pods.go:43] waiting for kube-system pods to appear ...
	I0806 01:53:11.155166    9177 system_pods.go:59] 7 kube-system pods found
	I0806 01:53:11.155184    9177 system_pods.go:61] "coredns-7db6d8ff4d-tfb8q" [00dd230b-06a4-4598-ad24-1ce421b94a52] Running
	I0806 01:53:11.155187    9177 system_pods.go:61] "etcd-flannel-060000" [d84342e9-2429-473d-ad37-731a10173583] Running
	I0806 01:53:11.155190    9177 system_pods.go:61] "kube-apiserver-flannel-060000" [ee290c99-fbd3-471e-9ca0-fe83ea03f8fa] Running
	I0806 01:53:11.155194    9177 system_pods.go:61] "kube-controller-manager-flannel-060000" [91317347-dba4-44b9-9eac-b8d23ef96b9a] Running
	I0806 01:53:11.155196    9177 system_pods.go:61] "kube-proxy-bbm2p" [616e7414-480e-4642-9156-39dd0450e153] Running
	I0806 01:53:11.155199    9177 system_pods.go:61] "kube-scheduler-flannel-060000" [26b6707b-d431-481e-8902-82406bd72a77] Running
	I0806 01:53:11.155202    9177 system_pods.go:61] "storage-provisioner" [49967b57-3171-493e-bd6b-c9449d53b11c] Running
	I0806 01:53:11.155206    9177 system_pods.go:74] duration metric: took 187.277083ms to wait for pod list to return data ...
	I0806 01:53:11.155213    9177 default_sa.go:34] waiting for default service account to be created ...
	I0806 01:53:11.351491    9177 default_sa.go:45] found service account: "default"
	I0806 01:53:11.351508    9177 default_sa.go:55] duration metric: took 196.290744ms for default service account to be created ...
	I0806 01:53:11.351518    9177 system_pods.go:116] waiting for k8s-apps to be running ...
	I0806 01:53:11.552327    9177 system_pods.go:86] 7 kube-system pods found
	I0806 01:53:11.552340    9177 system_pods.go:89] "coredns-7db6d8ff4d-tfb8q" [00dd230b-06a4-4598-ad24-1ce421b94a52] Running
	I0806 01:53:11.552344    9177 system_pods.go:89] "etcd-flannel-060000" [d84342e9-2429-473d-ad37-731a10173583] Running
	I0806 01:53:11.552347    9177 system_pods.go:89] "kube-apiserver-flannel-060000" [ee290c99-fbd3-471e-9ca0-fe83ea03f8fa] Running
	I0806 01:53:11.552351    9177 system_pods.go:89] "kube-controller-manager-flannel-060000" [91317347-dba4-44b9-9eac-b8d23ef96b9a] Running
	I0806 01:53:11.552375    9177 system_pods.go:89] "kube-proxy-bbm2p" [616e7414-480e-4642-9156-39dd0450e153] Running
	I0806 01:53:11.552379    9177 system_pods.go:89] "kube-scheduler-flannel-060000" [26b6707b-d431-481e-8902-82406bd72a77] Running
	I0806 01:53:11.552382    9177 system_pods.go:89] "storage-provisioner" [49967b57-3171-493e-bd6b-c9449d53b11c] Running
	I0806 01:53:11.552392    9177 system_pods.go:126] duration metric: took 200.870556ms to wait for k8s-apps to be running ...
	I0806 01:53:11.552401    9177 system_svc.go:44] waiting for kubelet service to be running ....
	I0806 01:53:11.552449    9177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 01:53:11.564003    9177 system_svc.go:56] duration metric: took 11.59674ms WaitForService to wait for kubelet
	I0806 01:53:11.564017    9177 kubeadm.go:582] duration metric: took 13.627357061s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0806 01:53:11.564036    9177 node_conditions.go:102] verifying NodePressure condition ...
	I0806 01:53:11.752396    9177 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0806 01:53:11.752417    9177 node_conditions.go:123] node cpu capacity is 2
	I0806 01:53:11.752430    9177 node_conditions.go:105] duration metric: took 188.39016ms to run NodePressure ...
	I0806 01:53:11.752441    9177 start.go:241] waiting for startup goroutines ...
	I0806 01:53:11.752449    9177 start.go:246] waiting for cluster config update ...
	I0806 01:53:11.752461    9177 start.go:255] writing updated cluster config ...
	I0806 01:53:11.753717    9177 ssh_runner.go:195] Run: rm -f paused
	I0806 01:53:11.798229    9177 start.go:600] kubectl: 1.29.2, cluster: 1.30.3 (minor skew: 1)
	I0806 01:53:11.839981    9177 out.go:177] * Done! kubectl is now configured to use "flannel-060000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Aug 06 08:53:40 NoKubernetes-883000 cri-dockerd[1170]: time="2024-08-06T08:53:40Z" level=error msg="Set backoffDuration to : 1m0s for container ID '60fd17d9a809019cd55339b94a3c98f4569c26a9c46699a3c3bde6f82c5ce0a3'"
	Aug 06 08:53:40 NoKubernetes-883000 cri-dockerd[1170]: time="2024-08-06T08:53:40Z" level=error msg="error getting RW layer size for container ID 'afab8cceafa6529dcfa145e07c2f3b79f7dc13b45fb24052db4cbfa1c243f7e7': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/afab8cceafa6529dcfa145e07c2f3b79f7dc13b45fb24052db4cbfa1c243f7e7/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 06 08:53:40 NoKubernetes-883000 cri-dockerd[1170]: time="2024-08-06T08:53:40Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'afab8cceafa6529dcfa145e07c2f3b79f7dc13b45fb24052db4cbfa1c243f7e7'"
	Aug 06 08:53:40 NoKubernetes-883000 cri-dockerd[1170]: time="2024-08-06T08:53:40Z" level=error msg="error getting RW layer size for container ID '5bdab6d0a04c0d6d32149324abcb0988043e6f0f00ff73ffc8e9c8bf782abc32': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/5bdab6d0a04c0d6d32149324abcb0988043e6f0f00ff73ffc8e9c8bf782abc32/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 06 08:53:40 NoKubernetes-883000 cri-dockerd[1170]: time="2024-08-06T08:53:40Z" level=error msg="Set backoffDuration to : 1m0s for container ID '5bdab6d0a04c0d6d32149324abcb0988043e6f0f00ff73ffc8e9c8bf782abc32'"
	Aug 06 08:53:40 NoKubernetes-883000 cri-dockerd[1170]: time="2024-08-06T08:53:40Z" level=error msg="error getting RW layer size for container ID '1c10df0681379f4f9494269eec5e242a2f9181e9a7b81cba29227970bd2b2d14': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/1c10df0681379f4f9494269eec5e242a2f9181e9a7b81cba29227970bd2b2d14/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 06 08:53:40 NoKubernetes-883000 cri-dockerd[1170]: time="2024-08-06T08:53:40Z" level=error msg="Set backoffDuration to : 1m0s for container ID '1c10df0681379f4f9494269eec5e242a2f9181e9a7b81cba29227970bd2b2d14'"
	Aug 06 08:53:40 NoKubernetes-883000 systemd[1]: Starting Docker Application Container Engine...
	Aug 06 08:53:40 NoKubernetes-883000 dockerd[3767]: time="2024-08-06T08:53:40.945094860Z" level=info msg="Starting up"
	Aug 06 08:54:40 NoKubernetes-883000 dockerd[3767]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 06 08:54:40 NoKubernetes-883000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 06 08:54:40 NoKubernetes-883000 cri-dockerd[1170]: time="2024-08-06T08:54:40Z" level=error msg="error getting RW layer size for container ID '60fd17d9a809019cd55339b94a3c98f4569c26a9c46699a3c3bde6f82c5ce0a3': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/60fd17d9a809019cd55339b94a3c98f4569c26a9c46699a3c3bde6f82c5ce0a3/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 06 08:54:40 NoKubernetes-883000 cri-dockerd[1170]: time="2024-08-06T08:54:40Z" level=error msg="Set backoffDuration to : 1m0s for container ID '60fd17d9a809019cd55339b94a3c98f4569c26a9c46699a3c3bde6f82c5ce0a3'"
	Aug 06 08:54:40 NoKubernetes-883000 cri-dockerd[1170]: time="2024-08-06T08:54:40Z" level=error msg="error getting RW layer size for container ID '5bdab6d0a04c0d6d32149324abcb0988043e6f0f00ff73ffc8e9c8bf782abc32': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/5bdab6d0a04c0d6d32149324abcb0988043e6f0f00ff73ffc8e9c8bf782abc32/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 06 08:54:40 NoKubernetes-883000 cri-dockerd[1170]: time="2024-08-06T08:54:40Z" level=error msg="Set backoffDuration to : 1m0s for container ID '5bdab6d0a04c0d6d32149324abcb0988043e6f0f00ff73ffc8e9c8bf782abc32'"
	Aug 06 08:54:40 NoKubernetes-883000 cri-dockerd[1170]: time="2024-08-06T08:54:40Z" level=error msg="error getting RW layer size for container ID '1c10df0681379f4f9494269eec5e242a2f9181e9a7b81cba29227970bd2b2d14': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/1c10df0681379f4f9494269eec5e242a2f9181e9a7b81cba29227970bd2b2d14/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 06 08:54:40 NoKubernetes-883000 cri-dockerd[1170]: time="2024-08-06T08:54:40Z" level=error msg="Set backoffDuration to : 1m0s for container ID '1c10df0681379f4f9494269eec5e242a2f9181e9a7b81cba29227970bd2b2d14'"
	Aug 06 08:54:40 NoKubernetes-883000 cri-dockerd[1170]: time="2024-08-06T08:54:40Z" level=error msg="Unable to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 06 08:54:40 NoKubernetes-883000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 06 08:54:40 NoKubernetes-883000 cri-dockerd[1170]: time="2024-08-06T08:54:40Z" level=error msg="error getting RW layer size for container ID 'afab8cceafa6529dcfa145e07c2f3b79f7dc13b45fb24052db4cbfa1c243f7e7': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/afab8cceafa6529dcfa145e07c2f3b79f7dc13b45fb24052db4cbfa1c243f7e7/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Aug 06 08:54:40 NoKubernetes-883000 cri-dockerd[1170]: time="2024-08-06T08:54:40Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'afab8cceafa6529dcfa145e07c2f3b79f7dc13b45fb24052db4cbfa1c243f7e7'"
	Aug 06 08:54:40 NoKubernetes-883000 systemd[1]: Failed to start Docker Application Container Engine.
	Aug 06 08:54:41 NoKubernetes-883000 systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
	Aug 06 08:54:41 NoKubernetes-883000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 06 08:54:41 NoKubernetes-883000 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-08-06T08:54:41Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/dockershim.sock: connect: no such file or directory\""
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v0.0.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v0.0.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	sudo: /var/lib/minikube/binaries/v0.0.0/kubectl: command not found
	
	
	==> dmesg <==
	[  +0.117074] systemd-fstab-generator[894]: Ignoring "noauto" option for root device
	[  +0.136119] systemd-fstab-generator[908]: Ignoring "noauto" option for root device
	[  +0.058779] kauditd_printk_skb: 139 callbacks suppressed
	[  +2.382518] systemd-fstab-generator[1123]: Ignoring "noauto" option for root device
	[  +0.106273] systemd-fstab-generator[1135]: Ignoring "noauto" option for root device
	[  +0.106414] systemd-fstab-generator[1147]: Ignoring "noauto" option for root device
	[  +0.140634] systemd-fstab-generator[1162]: Ignoring "noauto" option for root device
	[  +4.301732] systemd-fstab-generator[1263]: Ignoring "noauto" option for root device
	[  +0.064283] kauditd_printk_skb: 136 callbacks suppressed
	[  +2.913490] systemd-fstab-generator[1512]: Ignoring "noauto" option for root device
	[  +3.585641] systemd-fstab-generator[1688]: Ignoring "noauto" option for root device
	[  +0.054828] kauditd_printk_skb: 70 callbacks suppressed
	[  +6.971817] systemd-fstab-generator[2098]: Ignoring "noauto" option for root device
	[  +0.097112] kauditd_printk_skb: 62 callbacks suppressed
	[  +1.121017] systemd-fstab-generator[2160]: Ignoring "noauto" option for root device
	[ +12.965816] kauditd_printk_skb: 41 callbacks suppressed
	[  +0.217910] systemd-fstab-generator[2599]: Ignoring "noauto" option for root device
	[  +0.239962] systemd-fstab-generator[2635]: Ignoring "noauto" option for root device
	[  +0.105102] systemd-fstab-generator[2647]: Ignoring "noauto" option for root device
	[  +0.115417] systemd-fstab-generator[2661]: Ignoring "noauto" option for root device
	[Aug 6 08:52] systemd-fstab-generator[3519]: Ignoring "noauto" option for root device
	[  +0.056091] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.180110] systemd-fstab-generator[3552]: Ignoring "noauto" option for root device
	[  +0.101535] systemd-fstab-generator[3564]: Ignoring "noauto" option for root device
	[  +0.120365] systemd-fstab-generator[3578]: Ignoring "noauto" option for root device
	
	
	==> kernel <==
	 08:55:41 up 6 min,  0 users,  load average: 0.00, 0.02, 0.00
	Linux NoKubernetes-883000 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Aug 06 08:49:24 NoKubernetes-883000 kubelet[2106]: I0806 08:49:24.553930    2106 topology_manager.go:215] "Topology Admit Handler" podUID="17bed36778b5b5f05ef4a5fbe8acd7b4" podNamespace="kube-system" podName="etcd-nokubernetes-883000"
	Aug 06 08:49:24 NoKubernetes-883000 kubelet[2106]: I0806 08:49:24.625360    2106 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b8ccb3a20d980f06b1ffd004b127af11-kubeconfig\") pod \"kube-controller-manager-nokubernetes-883000\" (UID: \"b8ccb3a20d980f06b1ffd004b127af11\") " pod="kube-system/kube-controller-manager-nokubernetes-883000"
	Aug 06 08:49:24 NoKubernetes-883000 kubelet[2106]: I0806 08:49:24.625390    2106 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a340141d0bec8f6e3745c06f051cc90-k8s-certs\") pod \"kube-apiserver-nokubernetes-883000\" (UID: \"0a340141d0bec8f6e3745c06f051cc90\") " pod="kube-system/kube-apiserver-nokubernetes-883000"
	Aug 06 08:49:24 NoKubernetes-883000 kubelet[2106]: I0806 08:49:24.625409    2106 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a340141d0bec8f6e3745c06f051cc90-usr-share-ca-certificates\") pod \"kube-apiserver-nokubernetes-883000\" (UID: \"0a340141d0bec8f6e3745c06f051cc90\") " pod="kube-system/kube-apiserver-nokubernetes-883000"
	Aug 06 08:49:24 NoKubernetes-883000 kubelet[2106]: I0806 08:49:24.625425    2106 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b8ccb3a20d980f06b1ffd004b127af11-ca-certs\") pod \"kube-controller-manager-nokubernetes-883000\" (UID: \"b8ccb3a20d980f06b1ffd004b127af11\") " pod="kube-system/kube-controller-manager-nokubernetes-883000"
	Aug 06 08:49:24 NoKubernetes-883000 kubelet[2106]: I0806 08:49:24.625441    2106 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b8ccb3a20d980f06b1ffd004b127af11-flexvolume-dir\") pod \"kube-controller-manager-nokubernetes-883000\" (UID: \"b8ccb3a20d980f06b1ffd004b127af11\") " pod="kube-system/kube-controller-manager-nokubernetes-883000"
	Aug 06 08:49:24 NoKubernetes-883000 kubelet[2106]: I0806 08:49:24.625452    2106 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/17bed36778b5b5f05ef4a5fbe8acd7b4-etcd-certs\") pod \"etcd-nokubernetes-883000\" (UID: \"17bed36778b5b5f05ef4a5fbe8acd7b4\") " pod="kube-system/etcd-nokubernetes-883000"
	Aug 06 08:49:24 NoKubernetes-883000 kubelet[2106]: I0806 08:49:24.625461    2106 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/17bed36778b5b5f05ef4a5fbe8acd7b4-etcd-data\") pod \"etcd-nokubernetes-883000\" (UID: \"17bed36778b5b5f05ef4a5fbe8acd7b4\") " pod="kube-system/etcd-nokubernetes-883000"
	Aug 06 08:49:24 NoKubernetes-883000 kubelet[2106]: I0806 08:49:24.625471    2106 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a340141d0bec8f6e3745c06f051cc90-ca-certs\") pod \"kube-apiserver-nokubernetes-883000\" (UID: \"0a340141d0bec8f6e3745c06f051cc90\") " pod="kube-system/kube-apiserver-nokubernetes-883000"
	Aug 06 08:49:24 NoKubernetes-883000 kubelet[2106]: I0806 08:49:24.625482    2106 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b8ccb3a20d980f06b1ffd004b127af11-k8s-certs\") pod \"kube-controller-manager-nokubernetes-883000\" (UID: \"b8ccb3a20d980f06b1ffd004b127af11\") " pod="kube-system/kube-controller-manager-nokubernetes-883000"
	Aug 06 08:49:24 NoKubernetes-883000 kubelet[2106]: I0806 08:49:24.625492    2106 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b8ccb3a20d980f06b1ffd004b127af11-usr-share-ca-certificates\") pod \"kube-controller-manager-nokubernetes-883000\" (UID: \"b8ccb3a20d980f06b1ffd004b127af11\") " pod="kube-system/kube-controller-manager-nokubernetes-883000"
	Aug 06 08:49:24 NoKubernetes-883000 kubelet[2106]: I0806 08:49:24.625501    2106 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7abc356cbb90a86bad63d89ffb5681a9-kubeconfig\") pod \"kube-scheduler-nokubernetes-883000\" (UID: \"7abc356cbb90a86bad63d89ffb5681a9\") " pod="kube-system/kube-scheduler-nokubernetes-883000"
	Aug 06 08:49:25 NoKubernetes-883000 kubelet[2106]: I0806 08:49:25.404181    2106 apiserver.go:52] "Watching apiserver"
	Aug 06 08:49:25 NoKubernetes-883000 kubelet[2106]: I0806 08:49:25.422619    2106 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Aug 06 08:49:25 NoKubernetes-883000 kubelet[2106]: E0806 08:49:25.528490    2106 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-scheduler-nokubernetes-883000\" already exists" pod="kube-system/kube-scheduler-nokubernetes-883000"
	Aug 06 08:49:25 NoKubernetes-883000 kubelet[2106]: E0806 08:49:25.528623    2106 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-nokubernetes-883000\" already exists" pod="kube-system/kube-controller-manager-nokubernetes-883000"
	Aug 06 08:49:25 NoKubernetes-883000 kubelet[2106]: E0806 08:49:25.528996    2106 kubelet.go:1937] "Failed creating a mirror pod for" err="pods \"kube-apiserver-nokubernetes-883000\" already exists" pod="kube-system/kube-apiserver-nokubernetes-883000"
	Aug 06 08:49:25 NoKubernetes-883000 kubelet[2106]: I0806 08:49:25.554410    2106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-nokubernetes-883000" podStartSLOduration=1.5543959200000002 podStartE2EDuration="1.55439592s" podCreationTimestamp="2024-08-06 08:49:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-06 08:49:25.545220737 +0000 UTC m=+1.210126146" watchObservedRunningTime="2024-08-06 08:49:25.55439592 +0000 UTC m=+1.219301327"
	Aug 06 08:49:25 NoKubernetes-883000 kubelet[2106]: I0806 08:49:25.567762    2106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-nokubernetes-883000" podStartSLOduration=1.567709392 podStartE2EDuration="1.567709392s" podCreationTimestamp="2024-08-06 08:49:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-06 08:49:25.554704436 +0000 UTC m=+1.219609843" watchObservedRunningTime="2024-08-06 08:49:25.567709392 +0000 UTC m=+1.232614798"
	Aug 06 08:49:25 NoKubernetes-883000 kubelet[2106]: I0806 08:49:25.567830    2106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-nokubernetes-883000" podStartSLOduration=1.567826618 podStartE2EDuration="1.567826618s" podCreationTimestamp="2024-08-06 08:49:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-06 08:49:25.567547323 +0000 UTC m=+1.232452736" watchObservedRunningTime="2024-08-06 08:49:25.567826618 +0000 UTC m=+1.232732032"
	Aug 06 08:49:25 NoKubernetes-883000 kubelet[2106]: I0806 08:49:25.628871    2106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-nokubernetes-883000" podStartSLOduration=1.628842486 podStartE2EDuration="1.628842486s" podCreationTimestamp="2024-08-06 08:49:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-06 08:49:25.596632348 +0000 UTC m=+1.261537755" watchObservedRunningTime="2024-08-06 08:49:25.628842486 +0000 UTC m=+1.293747892"
	Aug 06 08:49:26 NoKubernetes-883000 kubelet[2106]: I0806 08:49:26.479220    2106 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	Aug 06 08:49:28 NoKubernetes-883000 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 06 08:49:28 NoKubernetes-883000 systemd[1]: kubelet.service: Deactivated successfully.
	Aug 06 08:49:28 NoKubernetes-883000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0806 01:53:40.790891    9226 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.46/containers/json?all=1&filters=%7B%22name%22%3A%7B%22k8s_kube-apiserver%22%3Atrue%7D%7D": dial unix /var/run/docker.sock: connect: permission denied
	E0806 01:54:40.839724    9226 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0806 01:54:40.852328    9226 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0806 01:54:40.864980    9226 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0806 01:54:40.875315    9226 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0806 01:54:40.886601    9226 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0806 01:54:40.898858    9226 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0806 01:54:40.909386    9226 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p NoKubernetes-883000 -n NoKubernetes-883000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p NoKubernetes-883000 -n NoKubernetes-883000: exit status 2 (153.294491ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "NoKubernetes-883000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (180.41s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (7201.667s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-375000 image list --format=json
E0806 02:03:22.453844    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
E0806 02:03:35.053159    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/calico-060000/client.crt: no such file or directory
E0806 02:03:35.058380    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/calico-060000/client.crt: no such file or directory
E0806 02:03:35.070561    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/calico-060000/client.crt: no such file or directory
E0806 02:03:35.090698    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/calico-060000/client.crt: no such file or directory
E0806 02:03:35.132872    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/calico-060000/client.crt: no such file or directory
E0806 02:03:35.214262    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/calico-060000/client.crt: no such file or directory
E0806 02:03:35.375459    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/calico-060000/client.crt: no such file or directory
E0806 02:03:35.697724    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/calico-060000/client.crt: no such file or directory
E0806 02:03:36.340013    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/calico-060000/client.crt: no such file or directory
E0806 02:03:37.620850    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/calico-060000/client.crt: no such file or directory
E0806 02:03:39.594164    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/flannel-060000/client.crt: no such file or directory
E0806 02:03:40.181536    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/calico-060000/client.crt: no such file or directory
E0806 02:03:45.303215    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/calico-060000/client.crt: no such file or directory
E0806 02:03:47.373697    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/bridge-060000/client.crt: no such file or directory
panic: test timed out after 2h0m0s
running tests:
	TestNetworkPlugins (53m51s)
	TestNetworkPlugins/group (3m19s)
	TestStartStop (15m36s)
	TestStartStop/group/no-preload (3m19s)
	TestStartStop/group/no-preload/serial (3m19s)
	TestStartStop/group/no-preload/serial/SecondStart (1m28s)
	TestStartStop/group/old-k8s-version (4m41s)
	TestStartStop/group/old-k8s-version/serial (4m41s)
	TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (30s)

                                                
                                                
goroutine 3842 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

                                                
                                                
goroutine 1 [chan receive, 19 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000658d00, 0xc00076fbb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc000010558, {0xfdb1d00, 0x2a, 0x2a}, {0xb881825?, 0xd3bb08d?, 0xfdd4d00?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0006cc0a0)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0006cc0a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

                                                
                                                
goroutine 10 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc0007a8480)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 159 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 158
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3797 [chan receive, 2 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001d65700, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3760
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 168 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0013d68a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 167
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2672 [chan receive, 17 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc001580000, 0xea1f5c0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2250
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 25 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1141 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 24
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x171

                                                
                                                
goroutine 3315 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0009fc840)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3293
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2943 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0013daf60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2942
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 935 [chan receive, 106 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001d65200, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 878
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 157 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000906610, 0x2d)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xe513860?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0013d6780)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000906640)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000c36000, {0xea2b620, 0xc00094a180}, 0x1, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000c36000, 0x3b9aca00, 0x0, 0x1, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 169
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 158 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xea4f3d0, 0xc000058c60}, 0xc000504f50, 0xc000bf7f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xea4f3d0, 0xc000058c60}, 0x0?, 0xc000504f50, 0xc000504f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xea4f3d0?, 0xc000058c60?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000504fd0?, 0xbdbdce5?, 0xc0013d68a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 169
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 909 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xea4f3d0, 0xc000058c60}, 0xc000095f50, 0xc000c66f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xea4f3d0, 0xc000058c60}, 0x11?, 0xc000095f50, 0xc000095f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xea4f3d0?, 0xc000058c60?}, 0xc000659380?, 0xb8f56a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000095fd0?, 0xb93b9a4?, 0xc000ba15c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 935
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3779 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001d656d0, 0x0)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xe513860?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001916120)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001d65700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00006b710, {0xea2b620, 0xc0014f0e70}, 0x1, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00006b710, 0x3b9aca00, 0x0, 0x1, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3797
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 169 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000906640, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 167
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2260 [chan receive, 4 minutes]:
testing.(*testContext).waitParallel(0xc0007d2230)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1665 +0x5e9
testing.tRunner(0xc0013c91e0, 0xc002119e90)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2155
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3533 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001571080)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3532
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2839 [chan receive, 13 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0009069c0, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2837
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 939 [chan send, 106 minutes]:
os/exec.(*Cmd).watchCtx(0xc001490780, 0xc0015e9d40)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 938
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 3735 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001ea5560)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3719
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2155 [chan receive, 55 minutes]:
testing.(*T).Run(0xc00073c680, {0xd36154a?, 0x36415adb844?}, 0xc002119e90)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc00073c680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc00073c680, 0xea1f418)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2944 [chan receive, 11 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00093ecc0, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2942
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 934 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001636a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 878
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2235 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc0009066d0, 0x1c)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xe513860?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001ea4a80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000906700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0013f0d10, {0xea2b620, 0xc0014e42d0}, 0x1, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0013f0d10, 0x3b9aca00, 0x0, 0x1, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2216
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2719 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xea4f3d0, 0xc000058c60}, 0xc000c4ef50, 0xc000c4ef98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xea4f3d0, 0xc000058c60}, 0x40?, 0xc000c4ef50, 0xc000c4ef98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xea4f3d0?, 0xc000058c60?}, 0xc00073d520?, 0xb8f56a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000c4efd0?, 0xb93b9a4?, 0xc001b12240?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2724
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3861 [select]:
os/exec.(*Cmd).watchCtx(0xc0017c2600, 0xc001dde900)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3858
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 3781 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3780
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3088 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xea4f3d0, 0xc000058c60}, 0xc000095f50, 0xc000095f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xea4f3d0, 0xc000058c60}, 0xa0?, 0xc000095f50, 0xc000095f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xea4f3d0?, 0xc000058c60?}, 0xc001e06820?, 0xc001e06820?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xb93b945?, 0xc001b0c000?, 0xc0006728a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3090
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 3219 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001af0390, 0x10)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xe513860?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001dcd2c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001af03c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0020fe830, {0xea2b620, 0xc001e3a750}, 0x1, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0020fe830, 0x3b9aca00, 0x0, 0x1, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3205
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 3723 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc00087eb50, 0x0)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xe513860?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001ea5440)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00087eb80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0007943d0, {0xea2b620, 0xc0015ea750}, 0x1, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0007943d0, 0x3b9aca00, 0x0, 0x1, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3736
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 3417 [chan receive, 6 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001d64780, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3429
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 745 [IO wait, 111 minutes]:
internal/poll.runtime_pollWait(0x576b1238, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00090f880?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc00090f880)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc00090f880)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc0007c6060)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0007c6060)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc00065c0f0, {0xea422f0, 0xc0007c6060})
	/usr/local/go/src/net/http/server.go:3260 +0x33e
net/http.(*Server).ListenAndServe(0xc00065c0f0)
	/usr/local/go/src/net/http/server.go:3189 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc00073c9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 742
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

goroutine 1180 [chan send, 106 minutes]:
os/exec.(*Cmd).watchCtx(0xc001c50c00, 0xc001c8a7e0)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1179
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 3087 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc0008839d0, 0x10)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xe513860?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001dd7260)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000883a00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0006b8050, {0xea2b620, 0xc002132030}, 0x1, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0006b8050, 0x3b9aca00, 0x0, 0x1, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3090
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 3330 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xea4f3d0, 0xc000058c60}, 0xc000c4f750, 0xc000c4f798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xea4f3d0, 0xc000058c60}, 0x0?, 0xc000c4f750, 0xc000c4f798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xea4f3d0?, 0xc000058c60?}, 0xc0015b6601?, 0xc000058c60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0xc0017c2101?, 0xc000058c60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3316
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 1164 [chan send, 106 minutes]:
os/exec.(*Cmd).watchCtx(0xc001bca900, 0xc001bce360)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 869
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 2819 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc000906990, 0x12)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xe513860?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0013d7680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0009069c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0013f0010, {0xea2b620, 0xc0014de060}, 0x1, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0013f0010, 0x3b9aca00, 0x0, 0x1, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2839
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 2724 [chan receive, 15 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00093ec00, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2714
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 3105 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3088
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 1254 [select, 106 minutes]:
net/http.(*persistConn).writeLoop(0xc001cd4360)
	/usr/local/go/src/net/http/transport.go:2458 +0xf0
created by net/http.(*Transport).dialConn in goroutine 1269
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

goroutine 910 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 909
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 908 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001d651d0, 0x2b)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xe513860?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001636900)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001d65200)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0013ea860, {0xea2b620, 0xc001500750}, 0x1, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0013ea860, 0x3b9aca00, 0x0, 0x1, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 935
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 3090 [chan receive, 10 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000883a00, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3069
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 3725 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3724
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 3221 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3220
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 2821 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2820
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 3860 [IO wait]:
internal/poll.runtime_pollWait(0x576b0e58, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001dcdaa0?, 0xc00153d000?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001dcdaa0, {0xc00153d000, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000900408, {0xc00153d000?, 0xc00073e008?, 0x0?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001b806c0, {0xea2a038, 0xc001c80070})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xea2a178, 0xc001b806c0}, {0xea2a038, 0xc001c80070}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0xea2a178, 0xc001b806c0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc0013deec0?, {0xea2a178?, 0xc001b806c0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0xea2a178, 0xc001b806c0}, {0xea2a0f8, 0xc000900408}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc000225680?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3858
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 2250 [chan receive, 17 minutes]:
testing.(*T).Run(0xc0015801a0, {0xd36154a?, 0xb8f4d73?}, 0xea1f5c0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc0015801a0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc0015801a0, 0xea1f460)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2236 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xea4f3d0, 0xc000058c60}, 0xc000c4ef50, 0xc000bf3f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xea4f3d0, 0xc000058c60}, 0x40?, 0xc000c4ef50, 0xc000c4ef98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xea4f3d0?, 0xc000058c60?}, 0xc00073d520?, 0xb8f56a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000c4efd0?, 0xb93b9a4?, 0xc001b12240?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2216
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 3859 [IO wait]:
internal/poll.runtime_pollWait(0x576b0c68, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001dcd6e0?, 0xc00153ce00?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001dcd6e0, {0xc00153ce00, 0x200, 0x200})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0009003e0, {0xc00153ce00?, 0xb8f550d?, 0x0?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001b80690, {0xea2a038, 0xc001c80068})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xea2a178, 0xc001b80690}, {0xea2a038, 0xc001c80068}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xfce5a20?, {0xea2a178, 0xc001b80690})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x7ff7b574f242?, {0xea2a178?, 0xc001b80690?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0xea2a178, 0xc001b80690}, {0xea2a0f8, 0xc0009003e0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc000656800?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3858
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 2237 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2236
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 3632 [chan receive]:
testing.(*T).Run(0xc0017e6000, {0xd38cc9a?, 0x60400000004?}, 0xc000656800)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0017e6000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0017e6000, 0xc001b24e00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2689
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 3089 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001dd7380)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3069
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2215 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001ea4ba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2200
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 1253 [select, 106 minutes]:
net/http.(*persistConn).readLoop(0xc001cd4360)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 1269
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

goroutine 2838 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0013d77a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2837
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 1115 [chan send, 106 minutes]:
os/exec.(*Cmd).watchCtx(0xc001b16780, 0xc001b124e0)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1114
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 2216 [chan receive, 55 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000906700, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2200
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 3331 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3330
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 2723 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001dcd5c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2714
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 3537 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xea4f3d0, 0xc000058c60}, 0xc0013e3750, 0xc0013e3798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xea4f3d0, 0xc000058c60}, 0xc0?, 0xc0013e3750, 0xc0013e3798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xea4f3d0?, 0xc000058c60?}, 0xc0015b6ea0?, 0xb8f56a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0013e37d0?, 0xb93b9a4?, 0xc0013e37a8?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3534
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 2956 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc00093ec90, 0x12)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xe513860?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0013dad20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00093ecc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00076eb10, {0xea2b620, 0xc0015e4210}, 0x1, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00076eb10, 0x3b9aca00, 0x0, 0x1, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2944
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 3205 [chan receive, 8 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001af03c0, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3199
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 2689 [chan receive, 5 minutes]:
testing.(*T).Run(0xc001580340, {0xd362b9a?, 0x0?}, 0xc001b24e00)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001580340)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001580340, 0xc00093e400)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2672
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2718 [sync.Cond.Wait, 4 minutes]:
sync.runtime_notifyListWait(0xc00093ebd0, 0x12)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xe513860?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001dcd4a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00093ec00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001bc8dc0, {0xea2b620, 0xc0014dec00}, 0x1, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001bc8dc0, 0x3b9aca00, 0x0, 0x1, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2724
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 2691 [chan receive, 17 minutes]:
testing.(*testContext).waitParallel(0xc0007d2230)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001581380)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001581380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001581380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc001581380, 0xc00093e480)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2672
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2690 [chan receive, 17 minutes]:
testing.(*testContext).waitParallel(0xc0007d2230)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0015811e0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0015811e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0015811e0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0015811e0, 0xc00093e440)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2672
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2692 [chan receive, 4 minutes]:
testing.(*T).Run(0xc001581520, {0xd362b9a?, 0x0?}, 0xc0007a9280)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001581520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc001581520, 0xc00093e540)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2672
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2693 [chan receive, 17 minutes]:
testing.(*testContext).waitParallel(0xc0007d2230)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0015816c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0015816c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0015816c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0015816c0, 0xc00093e580)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2672
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2694 [chan receive, 17 minutes]:
testing.(*testContext).waitParallel(0xc0007d2230)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001581860)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001581860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001581860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc001581860, 0xc00093e6c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2672
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 3204 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001dcd3e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3199
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 3329 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc00093fa90, 0xf)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xe513860?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0009fc6c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00093fac0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000c364b0, {0xea2b620, 0xc00138ce70}, 0x1, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000c364b0, 0x3b9aca00, 0x0, 0x1, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3316
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 3433 [sync.Cond.Wait, 2 minutes]:
sync.runtime_notifyListWait(0xc001d64750, 0xe)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xe513860?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000c352c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001d64780)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0006f9b70, {0xea2b620, 0xc001b810b0}, 0x1, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0006f9b70, 0x3b9aca00, 0x0, 0x1, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3417
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 2720 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2719
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 2820 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xea4f3d0, 0xc000058c60}, 0xc000503f50, 0xc000503f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xea4f3d0, 0xc000058c60}, 0xa0?, 0xc000503f50, 0xc000503f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xea4f3d0?, 0xc000058c60?}, 0xc00073d860?, 0xb8f56a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000503fd0?, 0xb93b9a4?, 0xc0015e85a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2839
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 2957 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xea4f3d0, 0xc000058c60}, 0xc001863f50, 0xc001863f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xea4f3d0, 0xc000058c60}, 0x60?, 0xc001863f50, 0xc001863f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xea4f3d0?, 0xc000058c60?}, 0x632f6f692e736574?, 0x65732e6769666e6f?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001863fd0?, 0xb93b9a4?, 0xc001dfd260?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2944
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 2958 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2957
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 3534 [chan receive, 6 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001af12c0, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3532
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 3220 [select, 4 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xea4f3d0, 0xc000058c60}, 0xc001863750, 0xc001863798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xea4f3d0, 0xc000058c60}, 0x98?, 0xc001863750, 0xc001863798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xea4f3d0?, 0xc000058c60?}, 0xc0015b6d00?, 0xb8f56a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0018637d0?, 0xb93b9a4?, 0xc00138d320?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3205
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 3538 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3537
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 3730 [chan receive, 2 minutes]:
testing.(*T).Run(0xc0017e6680, {0xd36e6c5?, 0x60400000004?}, 0xc000656900)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0017e6680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0017e6680, 0xc0007a9280)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2692
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 3316 [chan receive, 8 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00093fac0, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3293
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 3416 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc000c353e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3429
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 3520 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001af1290, 0xc)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xe513860?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001570f60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001af12c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0013ec0d0, {0xea2b620, 0xc001fff3e0}, 0x1, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0013ec0d0, 0x3b9aca00, 0x0, 0x1, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3534
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 3434 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xea4f3d0, 0xc000058c60}, 0xc000504750, 0xc000504798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xea4f3d0, 0xc000058c60}, 0x60?, 0xc000504750, 0xc000504798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xea4f3d0?, 0xc000058c60?}, 0xbd47016?, 0xc0017c2000?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0005047d0?, 0xb93b9a4?, 0xc0015e9860?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3417
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 3435 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3434
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 3736 [chan receive, 2 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00087eb80, 0xc000058c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3719
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 3780 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xea4f3d0, 0xc000058c60}, 0xc0013ddf50, 0xc0013ddf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xea4f3d0, 0xc000058c60}, 0xe0?, 0xc0013ddf50, 0xc0013ddf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xea4f3d0?, 0xc000058c60?}, 0xc0017e6ea0?, 0xb8f56a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0013ddfd0?, 0xb93b9a4?, 0xc001b139e0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3797
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 3796 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001916240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3760
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 3858 [syscall]:
syscall.syscall6(0xc001b81f80?, 0x1000000000010?, 0x10000000019?, 0x106ffa68?, 0x90?, 0x106f6108?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc00076d6c8?, 0xb7c20c5?, 0x90?, 0xe98b980?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0xb8f29e5?, 0xc00076d6fc, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc0013f9950)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0017c2600)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc0017c2600)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc0017e6b60, 0xc0017c2600)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.testPulledImages({0xea4f210, 0xc0004f84d0}, 0xc0017e6b60, {0xc001d16678, 0x16}, {0xd36550a, 0x7})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:359 +0x1ce
k8s.io/minikube/test/integration.validateKubernetesImages({0xea4f210, 0xc0004f84d0}, 0xc0017e6b60, {0xc001d16678, 0x16}, {0x18f0f6c801da8758?, 0xc001da8760?}, {0xd36550a, 0x7}, {0xc001c1c300, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:304 +0x75
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0017e6b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0017e6b60, 0xc000656800)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3632
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 3813 [syscall, 2 minutes]:
syscall.syscall6(0xc001e3bf80?, 0x1000000000010?, 0x10000000019?, 0x57188908?, 0x90?, 0x106f6108?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc0015b4b48?, 0xb7c20c5?, 0x90?, 0xe98b980?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0xb8f29e5?, 0xc0015b4b7c, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc000ba6840)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc001490c00)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc001490c00)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc0017e7520, 0xc001490c00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0xea4f210, 0xc0003e2310}, 0xc0017e7520, {0xc001d16fd8, 0x11}, {0x85a61c800504f58?, 0xc000504f60?}, {0xb8f4d73?, 0xb84cdcf?}, {0xc00050c000, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc0017e7520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc0017e7520, 0xc000656900)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3730
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 3724 [select, 2 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0xea4f3d0, 0xc000058c60}, 0xc001861750, 0xc001861798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0xea4f3d0, 0xc000058c60}, 0x0?, 0xc001861750, 0xc001861798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0xea4f3d0?, 0xc000058c60?}, 0xc00042a101?, 0xc000058c60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0xc001b0c101?, 0xc000058c60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3736
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 3815 [IO wait]:
internal/poll.runtime_pollWait(0x576b0980, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001636060?, 0xc0016e7cff?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001636060, {0xc0016e7cff, 0x1e301, 0x1e301})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001c806d8, {0xc0016e7cff?, 0xc000231880?, 0x20000?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001e3b020, {0xea2a038, 0xc000900f90})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xea2a178, 0xc001e3b020}, {0xea2a038, 0xc000900f90}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000c4fe78?, {0xea2a178, 0xc001e3b020})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xc000c4ff38?, {0xea2a178?, 0xc001e3b020?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0xea2a178, 0xc001e3b020}, {0xea2a0f8, 0xc001c806d8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001ddf1a0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3813
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 3816 [select, 2 minutes]:
os/exec.(*Cmd).watchCtx(0xc001490c00, 0xc000059f20)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3813
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 3814 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x576b05a0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc000c35f20?, 0xc00159f493?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000c35f20, {0xc00159f493, 0x36d, 0x36d})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc001c806c0, {0xc00159f493?, 0xb939b3a?, 0x226?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001e3aff0, {0xea2a038, 0xc000900f88})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0xea2a178, 0xc001e3aff0}, {0xea2a038, 0xc000900f88}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xfce5a20?, {0xea2a178, 0xc001e3aff0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0xf?, {0xea2a178?, 0xc001e3aff0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0xea2a178, 0xc001e3aff0}, {0xea2a0f8, 0xc001c806c0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc000656900?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3813
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae


Test pass (178/222)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 19.2
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.29
9 TestDownloadOnly/v1.20.0/DeleteAll 0.25
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.21
12 TestDownloadOnly/v1.30.3/json-events 15.48
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.29
18 TestDownloadOnly/v1.30.3/DeleteAll 0.23
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.21
21 TestDownloadOnly/v1.31.0-rc.0/json-events 17.74
22 TestDownloadOnly/v1.31.0-rc.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-rc.0/kubectl 0
26 TestDownloadOnly/v1.31.0-rc.0/LogsDuration 0.29
27 TestDownloadOnly/v1.31.0-rc.0/DeleteAll 0.23
28 TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds 0.21
30 TestBinaryMirror 0.91
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.21
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.19
36 TestAddons/Setup 215.19
38 TestAddons/serial/Volcano 39.26
40 TestAddons/serial/GCPAuth/Namespaces 0.1
42 TestAddons/parallel/Registry 15.96
43 TestAddons/parallel/Ingress 20.77
44 TestAddons/parallel/InspektorGadget 10.67
45 TestAddons/parallel/MetricsServer 5.55
46 TestAddons/parallel/HelmTiller 11.02
48 TestAddons/parallel/CSI 51.08
49 TestAddons/parallel/Headlamp 19.4
50 TestAddons/parallel/CloudSpanner 5.36
51 TestAddons/parallel/LocalPath 48.48
52 TestAddons/parallel/NvidiaDevicePlugin 5.34
53 TestAddons/parallel/Yakd 10.47
54 TestAddons/StoppedEnableDisable 5.99
62 TestHyperKitDriverInstallOrUpdate 8.78
65 TestErrorSpam/setup 36.55
66 TestErrorSpam/start 1.58
67 TestErrorSpam/status 0.51
68 TestErrorSpam/pause 1.35
69 TestErrorSpam/unpause 1.42
70 TestErrorSpam/stop 155.84
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 91.75
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 40.16
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.07
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.11
82 TestFunctional/serial/CacheCmd/cache/add_local 1.35
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
84 TestFunctional/serial/CacheCmd/cache/list 0.08
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.17
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.05
87 TestFunctional/serial/CacheCmd/cache/delete 0.16
88 TestFunctional/serial/MinikubeKubectlCmd 1.2
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.46
90 TestFunctional/serial/ExtraConfig 40.69
91 TestFunctional/serial/ComponentHealth 0.05
92 TestFunctional/serial/LogsCmd 2.82
93 TestFunctional/serial/LogsFileCmd 2.68
94 TestFunctional/serial/InvalidService 4.19
96 TestFunctional/parallel/ConfigCmd 0.5
97 TestFunctional/parallel/DashboardCmd 10.27
98 TestFunctional/parallel/DryRun 1.22
99 TestFunctional/parallel/InternationalLanguage 0.5
100 TestFunctional/parallel/StatusCmd 0.5
104 TestFunctional/parallel/ServiceCmdConnect 11.37
105 TestFunctional/parallel/AddonsCmd 0.22
106 TestFunctional/parallel/PersistentVolumeClaim 26.5
108 TestFunctional/parallel/SSHCmd 0.29
109 TestFunctional/parallel/CpCmd 0.93
110 TestFunctional/parallel/MySQL 24.22
111 TestFunctional/parallel/FileSync 0.18
112 TestFunctional/parallel/CertSync 1.11
116 TestFunctional/parallel/NodeLabels 0.06
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.2
120 TestFunctional/parallel/License 0.53
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.38
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.13
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
128 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.04
129 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
130 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
132 TestFunctional/parallel/ServiceCmd/DeployApp 6.11
133 TestFunctional/parallel/ProfileCmd/profile_not_create 0.26
134 TestFunctional/parallel/ProfileCmd/profile_list 0.26
135 TestFunctional/parallel/ProfileCmd/profile_json_output 0.26
136 TestFunctional/parallel/MountCmd/any-port 7.37
137 TestFunctional/parallel/ServiceCmd/List 0.37
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.37
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.25
140 TestFunctional/parallel/ServiceCmd/Format 0.3
141 TestFunctional/parallel/ServiceCmd/URL 0.26
142 TestFunctional/parallel/MountCmd/specific-port 1.4
143 TestFunctional/parallel/MountCmd/VerifyCleanup 1.51
144 TestFunctional/parallel/Version/short 0.1
145 TestFunctional/parallel/Version/components 0.47
146 TestFunctional/parallel/ImageCommands/ImageListShort 0.16
147 TestFunctional/parallel/ImageCommands/ImageListTable 0.15
148 TestFunctional/parallel/ImageCommands/ImageListJson 0.15
149 TestFunctional/parallel/ImageCommands/ImageListYaml 0.16
150 TestFunctional/parallel/ImageCommands/ImageBuild 3.09
151 TestFunctional/parallel/ImageCommands/Setup 1.87
152 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.27
153 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.76
154 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.48
155 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.34
156 TestFunctional/parallel/ImageCommands/ImageRemove 0.39
157 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.54
158 TestFunctional/parallel/DockerEnv/bash 0.62
159 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.32
160 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
161 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
162 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
163 TestFunctional/delete_echo-server_images 0.04
164 TestFunctional/delete_my-image_image 0.02
165 TestFunctional/delete_minikube_cached_images 0.02
169 TestMultiControlPlane/serial/StartCluster 205.23
170 TestMultiControlPlane/serial/DeployApp 4.9
171 TestMultiControlPlane/serial/PingHostFromPods 1.27
172 TestMultiControlPlane/serial/AddWorkerNode 49.54
173 TestMultiControlPlane/serial/NodeLabels 0.05
174 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.34
175 TestMultiControlPlane/serial/CopyFile 8.98
176 TestMultiControlPlane/serial/StopSecondaryNode 8.7
177 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.27
178 TestMultiControlPlane/serial/RestartSecondaryNode 43.41
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.33
180 TestMultiControlPlane/serial/RestartClusterKeepsNodes 212.85
181 TestMultiControlPlane/serial/DeleteSecondaryNode 8.08
182 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.27
183 TestMultiControlPlane/serial/StopCluster 24.98
190 TestImageBuild/serial/Setup 40.45
191 TestImageBuild/serial/NormalBuild 1.61
192 TestImageBuild/serial/BuildWithBuildArg 0.75
193 TestImageBuild/serial/BuildWithDockerIgnore 0.56
194 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.56
198 TestJSONOutput/start/Command 53.81
199 TestJSONOutput/start/Audit 0
201 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
204 TestJSONOutput/pause/Command 0.5
205 TestJSONOutput/pause/Audit 0
207 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
210 TestJSONOutput/unpause/Command 0.46
211 TestJSONOutput/unpause/Audit 0
213 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
214 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
216 TestJSONOutput/stop/Command 8.35
217 TestJSONOutput/stop/Audit 0
219 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
220 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
221 TestErrorJSONOutput 0.59
226 TestMainNoArgs 0.08
227 TestMinikubeProfile 88.45
237 TestMultiNode/serial/MultiNodeLabels 0.05
238 TestMultiNode/serial/ProfileList 0.19
243 TestMultiNode/serial/DeleteNode 11.27
244 TestMultiNode/serial/StopMultiNode 16.81
245 TestMultiNode/serial/RestartMultiNode 122.3
246 TestMultiNode/serial/ValidateNameConflict 41.84
250 TestPreload 178.72
253 TestSkaffold 112.63
256 TestRunningBinaryUpgrade 90.12
258 TestKubernetesUpgrade 1334.21
271 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.47
272 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 7.04
273 TestStoppedBinaryUpgrade/Setup 1.56
274 TestStoppedBinaryUpgrade/Upgrade 132.59
277 TestStoppedBinaryUpgrade/MinikubeLogs 2.37
286 TestNoKubernetes/serial/StartNoK8sWithVersion 0.47
287 TestNoKubernetes/serial/StartWithK8s 71.58
TestDownloadOnly/v1.20.0/json-events (19.2s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-241000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-241000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit : (19.199140597s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (19.20s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-241000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-241000: exit status 85 (292.894876ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-241000 | jenkins | v1.33.1 | 06 Aug 24 00:03 PDT |          |
	|         | -p download-only-241000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 00:03:50
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 00:03:50.649703    1439 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:03:50.649969    1439 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:03:50.649974    1439 out.go:304] Setting ErrFile to fd 2...
	I0806 00:03:50.649978    1439 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:03:50.650160    1439 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	W0806 00:03:50.650257    1439 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19370-944/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19370-944/.minikube/config/config.json: no such file or directory
	I0806 00:03:50.652015    1439 out.go:298] Setting JSON to true
	I0806 00:03:50.674192    1439 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":192,"bootTime":1722927638,"procs":414,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0806 00:03:50.674283    1439 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 00:03:50.696209    1439 out.go:97] [download-only-241000] minikube v1.33.1 on Darwin 14.5
	I0806 00:03:50.696328    1439 notify.go:220] Checking for updates...
	W0806 00:03:50.696334    1439 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball: no such file or directory
	I0806 00:03:50.716855    1439 out.go:169] MINIKUBE_LOCATION=19370
	I0806 00:03:50.738010    1439 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:03:50.760097    1439 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0806 00:03:50.781031    1439 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:03:50.801789    1439 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	W0806 00:03:50.843959    1439 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0806 00:03:50.844452    1439 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:03:50.908845    1439 out.go:97] Using the hyperkit driver based on user configuration
	I0806 00:03:50.908894    1439 start.go:297] selected driver: hyperkit
	I0806 00:03:50.908904    1439 start.go:901] validating driver "hyperkit" against <nil>
	I0806 00:03:50.909114    1439 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:03:50.909469    1439 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19370-944/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0806 00:03:51.316942    1439 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0806 00:03:51.322058    1439 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:03:51.322080    1439 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0806 00:03:51.322109    1439 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 00:03:51.326225    1439 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0806 00:03:51.326381    1439 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0806 00:03:51.326441    1439 cni.go:84] Creating CNI manager for ""
	I0806 00:03:51.326459    1439 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0806 00:03:51.326533    1439 start.go:340] cluster config:
	{Name:download-only-241000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-241000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:03:51.326760    1439 iso.go:125] acquiring lock: {Name:mka9ceffb203a07dd8928fb34e5b66df1a4204ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:03:51.347965    1439 out.go:97] Downloading VM boot image ...
	I0806 00:03:51.348067    1439 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/19370-944/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0806 00:03:58.089173    1439 out.go:97] Starting "download-only-241000" primary control-plane node in "download-only-241000" cluster
	I0806 00:03:58.089210    1439 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0806 00:03:58.145901    1439 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0806 00:03:58.145935    1439 cache.go:56] Caching tarball of preloaded images
	I0806 00:03:58.146288    1439 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0806 00:03:58.166675    1439 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0806 00:03:58.166702    1439 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0806 00:03:58.253581    1439 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0806 00:04:05.460521    1439 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0806 00:04:05.460719    1439 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-241000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-241000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.29s)

TestDownloadOnly/v1.20.0/DeleteAll (0.25s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.25s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-241000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnly/v1.30.3/json-events (15.48s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-339000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-339000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=hyperkit : (15.477304578s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (15.48s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-339000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-339000: exit status 85 (291.16367ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-241000 | jenkins | v1.33.1 | 06 Aug 24 00:03 PDT |                     |
	|         | -p download-only-241000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 06 Aug 24 00:04 PDT | 06 Aug 24 00:04 PDT |
	| delete  | -p download-only-241000        | download-only-241000 | jenkins | v1.33.1 | 06 Aug 24 00:04 PDT | 06 Aug 24 00:04 PDT |
	| start   | -o=json --download-only        | download-only-339000 | jenkins | v1.33.1 | 06 Aug 24 00:04 PDT |                     |
	|         | -p download-only-339000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 00:04:10
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 00:04:10.605455    1472 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:04:10.605708    1472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:04:10.605713    1472 out.go:304] Setting ErrFile to fd 2...
	I0806 00:04:10.605716    1472 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:04:10.605881    1472 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	I0806 00:04:10.607309    1472 out.go:298] Setting JSON to true
	I0806 00:04:10.629371    1472 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":212,"bootTime":1722927638,"procs":417,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0806 00:04:10.629455    1472 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 00:04:10.651490    1472 out.go:97] [download-only-339000] minikube v1.33.1 on Darwin 14.5
	I0806 00:04:10.651726    1472 notify.go:220] Checking for updates...
	I0806 00:04:10.673264    1472 out.go:169] MINIKUBE_LOCATION=19370
	I0806 00:04:10.694430    1472 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:04:10.715232    1472 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0806 00:04:10.736412    1472 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:04:10.757259    1472 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	W0806 00:04:10.799380    1472 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0806 00:04:10.799863    1472 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:04:10.830348    1472 out.go:97] Using the hyperkit driver based on user configuration
	I0806 00:04:10.830451    1472 start.go:297] selected driver: hyperkit
	I0806 00:04:10.830462    1472 start.go:901] validating driver "hyperkit" against <nil>
	I0806 00:04:10.830682    1472 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:04:10.830911    1472 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19370-944/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0806 00:04:10.840794    1472 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0806 00:04:10.844543    1472 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:04:10.844563    1472 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0806 00:04:10.844592    1472 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 00:04:10.847147    1472 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0806 00:04:10.847327    1472 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0806 00:04:10.847350    1472 cni.go:84] Creating CNI manager for ""
	I0806 00:04:10.847366    1472 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 00:04:10.847373    1472 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0806 00:04:10.847434    1472 start.go:340] cluster config:
	{Name:download-only-339000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-339000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:04:10.847525    1472 iso.go:125] acquiring lock: {Name:mka9ceffb203a07dd8928fb34e5b66df1a4204ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:04:10.868127    1472 out.go:97] Starting "download-only-339000" primary control-plane node in "download-only-339000" cluster
	I0806 00:04:10.868162    1472 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:04:10.924548    1472 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0806 00:04:10.924604    1472 cache.go:56] Caching tarball of preloaded images
	I0806 00:04:10.925103    1472 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:04:10.946897    1472 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0806 00:04:10.946924    1472 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0806 00:04:11.029649    1472 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4?checksum=md5:6304692df2fe6f7b0bdd7f93d160be8c -> /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0806 00:04:21.443817    1472 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0806 00:04:21.443998    1472 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0806 00:04:21.928780    1472 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0806 00:04:21.929014    1472 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/download-only-339000/config.json ...
	I0806 00:04:21.929036    1472 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/download-only-339000/config.json: {Name:mkad1732bdff57e87b2a33b46725c940bb59f092 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:04:21.929382    1472 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0806 00:04:21.929641    1472 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19370-944/.minikube/cache/darwin/amd64/v1.30.3/kubectl
	
	
	* The control-plane node download-only-339000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-339000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.29s)

TestDownloadOnly/v1.30.3/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.23s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-339000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnly/v1.31.0-rc.0/json-events (17.74s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-422000 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-422000 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=docker --driver=hyperkit : (17.734768055s)
--- PASS: TestDownloadOnly/v1.31.0-rc.0/json-events (17.74s)

TestDownloadOnly/v1.31.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-rc.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-422000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-422000: exit status 85 (291.412315ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-241000 | jenkins | v1.33.1 | 06 Aug 24 00:03 PDT |                     |
	|         | -p download-only-241000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=hyperkit                 |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 06 Aug 24 00:04 PDT | 06 Aug 24 00:04 PDT |
	| delete  | -p download-only-241000           | download-only-241000 | jenkins | v1.33.1 | 06 Aug 24 00:04 PDT | 06 Aug 24 00:04 PDT |
	| start   | -o=json --download-only           | download-only-339000 | jenkins | v1.33.1 | 06 Aug 24 00:04 PDT |                     |
	|         | -p download-only-339000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=hyperkit                 |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 06 Aug 24 00:04 PDT | 06 Aug 24 00:04 PDT |
	| delete  | -p download-only-339000           | download-only-339000 | jenkins | v1.33.1 | 06 Aug 24 00:04 PDT | 06 Aug 24 00:04 PDT |
	| start   | -o=json --download-only           | download-only-422000 | jenkins | v1.33.1 | 06 Aug 24 00:04 PDT |                     |
	|         | -p download-only-422000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=hyperkit                 |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/06 00:04:26
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0806 00:04:26.815028    1502 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:04:26.815183    1502 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:04:26.815193    1502 out.go:304] Setting ErrFile to fd 2...
	I0806 00:04:26.815197    1502 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:04:26.815384    1502 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	I0806 00:04:26.816809    1502 out.go:298] Setting JSON to true
	I0806 00:04:26.839367    1502 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":228,"bootTime":1722927638,"procs":406,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0806 00:04:26.839457    1502 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 00:04:26.860834    1502 out.go:97] [download-only-422000] minikube v1.33.1 on Darwin 14.5
	I0806 00:04:26.861054    1502 notify.go:220] Checking for updates...
	I0806 00:04:26.882484    1502 out.go:169] MINIKUBE_LOCATION=19370
	I0806 00:04:26.903691    1502 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:04:26.925000    1502 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0806 00:04:26.945783    1502 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:04:26.967059    1502 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	W0806 00:04:27.008574    1502 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0806 00:04:27.009044    1502 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:04:27.038759    1502 out.go:97] Using the hyperkit driver based on user configuration
	I0806 00:04:27.038810    1502 start.go:297] selected driver: hyperkit
	I0806 00:04:27.038822    1502 start.go:901] validating driver "hyperkit" against <nil>
	I0806 00:04:27.039040    1502 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:04:27.039297    1502 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19370-944/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0806 00:04:27.049613    1502 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0806 00:04:27.053932    1502 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:04:27.053951    1502 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0806 00:04:27.053977    1502 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0806 00:04:27.056803    1502 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0806 00:04:27.057139    1502 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0806 00:04:27.057163    1502 cni.go:84] Creating CNI manager for ""
	I0806 00:04:27.057178    1502 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0806 00:04:27.057185    1502 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0806 00:04:27.057279    1502 start.go:340] cluster config:
	{Name:download-only-422000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:download-only-422000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:04:27.057361    1502 iso.go:125] acquiring lock: {Name:mka9ceffb203a07dd8928fb34e5b66df1a4204ff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0806 00:04:27.078910    1502 out.go:97] Starting "download-only-422000" primary control-plane node in "download-only-422000" cluster
	I0806 00:04:27.078945    1502 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0806 00:04:27.133953    1502 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-amd64.tar.lz4
	I0806 00:04:27.134012    1502 cache.go:56] Caching tarball of preloaded images
	I0806 00:04:27.134364    1502 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0806 00:04:27.155856    1502 out.go:97] Downloading Kubernetes v1.31.0-rc.0 preload ...
	I0806 00:04:27.155882    1502 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0806 00:04:27.234230    1502 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-amd64.tar.lz4?checksum=md5:214beb6d5aadd59deaf940ce47a22f04 -> /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-amd64.tar.lz4
	I0806 00:04:34.321024    1502 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0806 00:04:34.321232    1502 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19370-944/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0806 00:04:34.788587    1502 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0806 00:04:34.788848    1502 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/download-only-422000/config.json ...
	I0806 00:04:34.788869    1502 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/download-only-422000/config.json: {Name:mk2bfbda4b7beb0ca5c39cad2e319cda9d441bdf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0806 00:04:34.789200    1502 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0806 00:04:34.789407    1502 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-rc.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19370-944/.minikube/cache/darwin/amd64/v1.31.0-rc.0/kubectl
	
	
	* The control-plane node download-only-422000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-422000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.29s)

TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.23s)

TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-422000
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.21s)

TestBinaryMirror (0.91s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-774000 --alsologtostderr --binary-mirror http://127.0.0.1:49634 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-774000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-774000
--- PASS: TestBinaryMirror (0.91s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-331000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-331000: exit status 85 (206.355829ms)

-- stdout --
	* Profile "addons-331000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-331000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-331000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-331000: exit status 85 (185.655227ms)

-- stdout --
	* Profile "addons-331000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-331000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

TestAddons/Setup (215.19s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-331000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-darwin-amd64 start -p addons-331000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m35.191589251s)
--- PASS: TestAddons/Setup (215.19s)

TestAddons/serial/Volcano (39.26s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 11.923042ms
addons_test.go:897: volcano-scheduler stabilized in 12.01813ms
addons_test.go:905: volcano-admission stabilized in 12.503912ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-7qzqz" [9099502f-3abe-4f25-86bf-49a0eb742a76] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.004489641s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-g6gvj" [a423bc26-7974-4002-8e0c-9a1bc6a8d75f] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003393599s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-k5tjr" [b622ebba-b885-4d2c-8a9f-7226bc67b1f1] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.00309122s
addons_test.go:932: (dbg) Run:  kubectl --context addons-331000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-331000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-331000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [b79682d0-b9e8-426f-a84c-c470b6275b38] Pending
helpers_test.go:344: "test-job-nginx-0" [b79682d0-b9e8-426f-a84c-c470b6275b38] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [b79682d0-b9e8-426f-a84c-c470b6275b38] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.005083507s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-amd64 -p addons-331000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-amd64 -p addons-331000 addons disable volcano --alsologtostderr -v=1: (9.959553134s)
--- PASS: TestAddons/serial/Volcano (39.26s)

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-331000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-331000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/parallel/Registry (15.96s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.896739ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-gncz9" [d67fbb4f-e618-4f33-938a-cd4cc883a271] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.006129651s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-fksdh" [6798b70e-47b3-4e78-9f7b-2882bc0f1e95] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.005444895s
addons_test.go:342: (dbg) Run:  kubectl --context addons-331000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-331000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-331000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.308155659s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-amd64 -p addons-331000 ip
2024/08/06 00:09:35 [DEBUG] GET http://192.169.0.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 -p addons-331000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.96s)

TestAddons/parallel/Ingress (20.77s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-331000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-331000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-331000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [2c132ebf-8396-4ecf-bc16-3f0d4b511380] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [2c132ebf-8396-4ecf-bc16-3f0d4b511380] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.004591678s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 -p addons-331000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-331000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 -p addons-331000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.169.0.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 -p addons-331000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-darwin-amd64 -p addons-331000 addons disable ingress-dns --alsologtostderr -v=1: (1.38486509s)
addons_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 -p addons-331000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-amd64 -p addons-331000 addons disable ingress --alsologtostderr -v=1: (7.462755293s)
--- PASS: TestAddons/parallel/Ingress (20.77s)

TestAddons/parallel/InspektorGadget (10.67s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-k8nnt" [2a39be8b-909c-4e01-9b23-2d0804b46847] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003943302s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-331000
addons_test.go:851: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-331000: (5.666784508s)
--- PASS: TestAddons/parallel/InspektorGadget (10.67s)

TestAddons/parallel/MetricsServer (5.55s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.615456ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-pcncq" [20723c8e-78ad-47bb-945b-bae678428e60] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004556684s
addons_test.go:417: (dbg) Run:  kubectl --context addons-331000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-amd64 -p addons-331000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.55s)

TestAddons/parallel/HelmTiller (11.02s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 1.469536ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-k9ld6" [822ecb66-c9b8-4cf1-b07f-225953b63249] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.003496515s
addons_test.go:475: (dbg) Run:  kubectl --context addons-331000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-331000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.598753941s)
addons_test.go:492: (dbg) Run:  out/minikube-darwin-amd64 -p addons-331000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.02s)

TestAddons/parallel/CSI (51.08s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 4.222615ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-331000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-331000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [454b945d-4138-45ef-ac50-6e7231a1a12b] Pending
helpers_test.go:344: "task-pv-pod" [454b945d-4138-45ef-ac50-6e7231a1a12b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [454b945d-4138-45ef-ac50-6e7231a1a12b] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003755032s
addons_test.go:590: (dbg) Run:  kubectl --context addons-331000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-331000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-331000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-331000 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-331000 delete pod task-pv-pod: (1.039837791s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-331000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-331000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-331000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [1e25d3f2-1085-44e5-bf54-62efe844eff7] Pending
helpers_test.go:344: "task-pv-pod-restore" [1e25d3f2-1085-44e5-bf54-62efe844eff7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [1e25d3f2-1085-44e5-bf54-62efe844eff7] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003835216s
addons_test.go:632: (dbg) Run:  kubectl --context addons-331000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-331000 delete pod task-pv-pod-restore: (1.013519057s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-331000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-331000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-amd64 -p addons-331000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-amd64 -p addons-331000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.407746246s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-amd64 -p addons-331000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (51.08s)

TestAddons/parallel/Headlamp (19.4s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-331000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-9d868696f-mplkl" [fe479e3c-4d81-47b1-8c18-f4421540c663] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-9d868696f-mplkl" [fe479e3c-4d81-47b1-8c18-f4421540c663] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004507107s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-amd64 -p addons-331000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-amd64 -p addons-331000 addons disable headlamp --alsologtostderr -v=1: (5.469391s)
--- PASS: TestAddons/parallel/Headlamp (19.40s)

TestAddons/parallel/CloudSpanner (5.36s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-26679" [5955083f-dadf-4aef-8031-3fb6437c0538] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005232556s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-331000
--- PASS: TestAddons/parallel/CloudSpanner (5.36s)

TestAddons/parallel/LocalPath (48.48s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-331000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-331000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-331000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [2d7f1340-c2f9-471a-b20c-a5d72789690d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [2d7f1340-c2f9-471a-b20c-a5d72789690d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [2d7f1340-c2f9-471a-b20c-a5d72789690d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004385169s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-331000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-amd64 -p addons-331000 ssh "cat /opt/local-path-provisioner/pvc-11da36e4-412d-451d-95e9-1317cdaf0ef9_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-331000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-331000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-amd64 -p addons-331000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-amd64 -p addons-331000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (36.836831404s)
--- PASS: TestAddons/parallel/LocalPath (48.48s)

TestAddons/parallel/NvidiaDevicePlugin (5.34s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-rvh7v" [dd1146e9-ff22-472a-845c-d8bdb38ded52] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003033655s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-331000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.34s)

TestAddons/parallel/Yakd (10.47s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-xvmv9" [d3afa3a9-1aac-41d2-b1b9-266ef7cea15b] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003900721s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-amd64 -p addons-331000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-amd64 -p addons-331000 addons disable yakd --alsologtostderr -v=1: (5.463625662s)
--- PASS: TestAddons/parallel/Yakd (10.47s)

TestAddons/StoppedEnableDisable (5.99s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-331000
addons_test.go:174: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-331000: (5.419029672s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-331000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-331000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-331000
--- PASS: TestAddons/StoppedEnableDisable (5.99s)

TestHyperKitDriverInstallOrUpdate (8.78s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.78s)

TestErrorSpam/setup (36.55s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-619000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-619000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-619000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-619000 --driver=hyperkit : (36.545476476s)
--- PASS: TestErrorSpam/setup (36.55s)

TestErrorSpam/start (1.58s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-619000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-619000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-619000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-619000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-619000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-619000 start --dry-run
--- PASS: TestErrorSpam/start (1.58s)

TestErrorSpam/status (0.51s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-619000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-619000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-619000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-619000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-619000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-619000 status
--- PASS: TestErrorSpam/status (0.51s)

TestErrorSpam/pause (1.35s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-619000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-619000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-619000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-619000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-619000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-619000 pause
--- PASS: TestErrorSpam/pause (1.35s)

TestErrorSpam/unpause (1.42s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-619000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-619000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-619000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-619000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-619000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-619000 unpause
--- PASS: TestErrorSpam/unpause (1.42s)

TestErrorSpam/stop (155.84s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-619000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-619000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-619000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-619000 stop: (5.389584685s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-619000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-619000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-619000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-619000 stop: (1m15.224348993s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-619000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-619000 stop
E0806 00:13:22.157935    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
E0806 00:13:22.166000    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
E0806 00:13:22.176092    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
E0806 00:13:22.196212    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
E0806 00:13:22.236364    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
E0806 00:13:22.318480    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
E0806 00:13:22.480619    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
E0806 00:13:22.802820    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
E0806 00:13:23.443056    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
E0806 00:13:24.723365    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
E0806 00:13:27.283553    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
E0806 00:13:32.403753    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
E0806 00:13:42.643909    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
E0806 00:14:03.124035    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-amd64 -p nospam-619000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-619000 stop: (1m15.225917914s)
--- PASS: TestErrorSpam/stop (155.84s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19370-944/.minikube/files/etc/test/nested/copy/1437/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (91.75s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-439000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
E0806 00:14:44.085615    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-439000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (1m31.747139093s)
--- PASS: TestFunctional/serial/StartWithProxy (91.75s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (40.16s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-439000 --alsologtostderr -v=8
E0806 00:16:06.006473    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-439000 --alsologtostderr -v=8: (40.164246146s)
functional_test.go:659: soft start took 40.164772126s for "functional-439000" cluster.
--- PASS: TestFunctional/serial/SoftStart (40.16s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-439000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-439000 cache add registry.k8s.io/pause:3.1: (1.155319897s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-439000 cache add registry.k8s.io/pause:3.3: (1.040814212s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.11s)

TestFunctional/serial/CacheCmd/cache/add_local (1.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-439000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialCacheCmdcacheadd_local2138331122/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 cache add minikube-local-cache-test:functional-439000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 cache delete minikube-local-cache-test:functional-439000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-439000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.35s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-439000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (143.983062ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 kubectl -- --context functional-439000 get pods
functional_test.go:712: (dbg) Done: out/minikube-darwin-amd64 -p functional-439000 kubectl -- --context functional-439000 get pods: (1.195011599s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.20s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-439000 get pods
functional_test.go:737: (dbg) Done: out/kubectl --context functional-439000 get pods: (1.455239769s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.46s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-439000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-439000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.686945835s)
functional_test.go:757: restart took 40.687047745s for "functional-439000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.69s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-439000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)
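The ComponentHealth check above reads the JSON from `kubectl get po -l tier=control-plane -n kube-system -o=json` and asserts that each control-plane pod is in phase Running with a Ready condition. A minimal Go sketch of that shape (a stand-in, not the actual code in functional_test.go; the structs cover only the fields such a check needs):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// podList models just enough of `kubectl get po -o=json` output
// to check phase and readiness.
type podList struct {
	Items []struct {
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

// healthy reports whether every pod in the JSON document is Running
// and carries a Ready=True condition.
func healthy(raw []byte) (bool, error) {
	var pl podList
	if err := json.Unmarshal(raw, &pl); err != nil {
		return false, err
	}
	for _, p := range pl.Items {
		if p.Status.Phase != "Running" {
			return false, nil
		}
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				ready = true
			}
		}
		if !ready {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	// Shortened sample in the shape kubectl emits.
	sample := []byte(`{"items":[{"status":{"phase":"Running","conditions":[{"type":"Ready","status":"True"}]}}]}`)
	ok, err := healthy(sample)
	fmt.Println(ok, err)
}
```

The test log's "phase: Running" / "status: Ready" pairs for etcd, kube-apiserver, kube-controller-manager, and kube-scheduler correspond to exactly this pair of checks per pod.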

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-439000 logs: (2.815926568s)
--- PASS: TestFunctional/serial/LogsCmd (2.82s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd3110889438/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-439000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd3110889438/001/logs.txt: (2.683486185s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.68s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-439000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-439000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-439000: exit status 115 (266.137346ms)

-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://192.169.0.4:32690 |
	|-----------|-------------|-------------|--------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-439000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.19s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-439000 config get cpus: exit status 14 (72.012599ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-439000 config get cpus: exit status 14 (55.136721ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-439000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-439000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2587: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.27s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-439000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-439000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (506.965976ms)

-- stdout --
	* [functional-439000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0806 00:18:13.271756    2549 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:18:13.271926    2549 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:18:13.271932    2549 out.go:304] Setting ErrFile to fd 2...
	I0806 00:18:13.271935    2549 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:18:13.272090    2549 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	I0806 00:18:13.273516    2549 out.go:298] Setting JSON to false
	I0806 00:18:13.295942    2549 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1055,"bootTime":1722927638,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0806 00:18:13.296031    2549 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 00:18:13.317691    2549 out.go:177] * [functional-439000] minikube v1.33.1 on Darwin 14.5
	I0806 00:18:13.359754    2549 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 00:18:13.359822    2549 notify.go:220] Checking for updates...
	I0806 00:18:13.401227    2549 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:18:13.422499    2549 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0806 00:18:13.443602    2549 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:18:13.464251    2549 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 00:18:13.485480    2549 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 00:18:13.507220    2549 config.go:182] Loaded profile config "functional-439000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:18:13.507882    2549 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:18:13.507975    2549 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:18:13.517332    2549 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50626
	I0806 00:18:13.517726    2549 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:18:13.518123    2549 main.go:141] libmachine: Using API Version  1
	I0806 00:18:13.518131    2549 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:18:13.518373    2549 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:18:13.518484    2549 main.go:141] libmachine: (functional-439000) Calling .DriverName
	I0806 00:18:13.518675    2549 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:18:13.518912    2549 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:18:13.518935    2549 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:18:13.527170    2549 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50628
	I0806 00:18:13.527537    2549 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:18:13.527861    2549 main.go:141] libmachine: Using API Version  1
	I0806 00:18:13.527877    2549 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:18:13.528110    2549 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:18:13.528246    2549 main.go:141] libmachine: (functional-439000) Calling .DriverName
	I0806 00:18:13.556434    2549 out.go:177] * Using the hyperkit driver based on existing profile
	I0806 00:18:13.598412    2549 start.go:297] selected driver: hyperkit
	I0806 00:18:13.598432    2549 start.go:901] validating driver "hyperkit" against &{Name:functional-439000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.3 ClusterName:functional-439000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:18:13.598597    2549 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 00:18:13.641637    2549 out.go:177] 
	W0806 00:18:13.662502    2549 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0806 00:18:13.683153    2549 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-439000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (1.22s)
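The DryRun failure above is the expected path: minikube validates the requested memory before touching the VM, and 250MiB is below the 1800MB usable minimum. A hedged sketch of that validation's shape (function and variable names here are illustrative, not minikube's internals; the 1800 figure is taken from the error text in the log):

```go
package main

import (
	"errors"
	"fmt"
)

// minUsableMemoryMB is the floor cited by the RSRC_INSUFFICIENT_REQ_MEMORY
// message in the log above.
const minUsableMemoryMB = 1800

var errInsufficientMemory = errors.New("RSRC_INSUFFICIENT_REQ_MEMORY")

// validateRequestedMemory rejects allocations below the usable minimum,
// mirroring the check that produced exit status 23 in this run.
func validateRequestedMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("%w: requested memory allocation %dMiB is less than the usable minimum of %dMB",
			errInsufficientMemory, requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	fmt.Println(validateRequestedMemory(250))  // fails, as in the log
	fmt.Println(validateRequestedMemory(2048)) // passes
}
```

Note that InternationalLanguage below exercises the same failing check, only with the message localized to French.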

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-439000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-439000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (497.708497ms)

-- stdout --
	* [functional-439000] minikube v1.33.1 sur Darwin 14.5
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0806 00:18:12.767129    2542 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:18:12.767287    2542 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:18:12.767294    2542 out.go:304] Setting ErrFile to fd 2...
	I0806 00:18:12.767298    2542 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:18:12.767480    2542 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	I0806 00:18:12.769120    2542 out.go:298] Setting JSON to false
	I0806 00:18:12.792053    2542 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1054,"bootTime":1722927638,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0806 00:18:12.792143    2542 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0806 00:18:12.814480    2542 out.go:177] * [functional-439000] minikube v1.33.1 sur Darwin 14.5
	I0806 00:18:12.872245    2542 notify.go:220] Checking for updates...
	I0806 00:18:12.892950    2542 out.go:177]   - MINIKUBE_LOCATION=19370
	I0806 00:18:12.913738    2542 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	I0806 00:18:12.933962    2542 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0806 00:18:12.954888    2542 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0806 00:18:12.975920    2542 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	I0806 00:18:12.997108    2542 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0806 00:18:13.018739    2542 config.go:182] Loaded profile config "functional-439000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:18:13.019354    2542 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:18:13.019434    2542 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:18:13.028934    2542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50621
	I0806 00:18:13.029312    2542 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:18:13.029714    2542 main.go:141] libmachine: Using API Version  1
	I0806 00:18:13.029726    2542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:18:13.029934    2542 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:18:13.030067    2542 main.go:141] libmachine: (functional-439000) Calling .DriverName
	I0806 00:18:13.030260    2542 driver.go:392] Setting default libvirt URI to qemu:///system
	I0806 00:18:13.030500    2542 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:18:13.030523    2542 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:18:13.038687    2542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50623
	I0806 00:18:13.039020    2542 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:18:13.039308    2542 main.go:141] libmachine: Using API Version  1
	I0806 00:18:13.039315    2542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:18:13.039520    2542 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:18:13.039643    2542 main.go:141] libmachine: (functional-439000) Calling .DriverName
	I0806 00:18:13.067890    2542 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I0806 00:18:13.109950    2542 start.go:297] selected driver: hyperkit
	I0806 00:18:13.109960    2542 start.go:901] validating driver "hyperkit" against &{Name:functional-439000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.3 ClusterName:functional-439000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0806 00:18:13.110069    2542 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0806 00:18:13.133885    2542 out.go:177] 
	W0806 00:18:13.155185    2542 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0806 00:18:13.176027    2542 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.50s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.50s)
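The second status invocation above passes a Go text/template via `-f`, so placeholders like `{{.Host}}` and `{{.Kubelet}}` are rendered against the status object. A standalone sketch of how such a format string renders (the `Status` struct here is a stand-in for minikube's type, and the sample values are illustrative; "kublet" is a literal label copied verbatim from the command, typo included):

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// Status stands in for the object minikube exposes to `status -f`.
type Status struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

// render parses a `-f` style format string and executes it against st.
func render(format string, st Status) (string, error) {
	tmpl, err := template.New("status").Parse(format)
	if err != nil {
		return "", err
	}
	var b strings.Builder
	if err := tmpl.Execute(&b, st); err != nil {
		return "", err
	}
	return b.String(), nil
}

func main() {
	// Same format string as the test invocation above.
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
	out, err := render(format, Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"})
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
	// Prints: host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured
}
```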

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-439000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-439000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-j2gmv" [70a421a0-1940-44e2-b6f5-adb943871f54] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-j2gmv" [70a421a0-1940-44e2-b6f5-adb943871f54] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.004401731s
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.169.0.4:31664
functional_test.go:1671: http://192.169.0.4:31664: success! body:

Hostname: hello-node-connect-57b4589c47-j2gmv

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.169.0.4:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.169.0.4:31664
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.37s)

TestFunctional/parallel/AddonsCmd (0.22s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

TestFunctional/parallel/PersistentVolumeClaim (26.5s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [745b8c4c-cd01-4dbb-968d-f4e2afd6530b] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003764097s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-439000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-439000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-439000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-439000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5184c168-1d09-4785-a8bb-5eb4dffd5cce] Pending
helpers_test.go:344: "sp-pod" [5184c168-1d09-4785-a8bb-5eb4dffd5cce] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [5184c168-1d09-4785-a8bb-5eb4dffd5cce] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.00555229s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-439000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-439000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-439000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5bd690da-7f53-4fae-a577-0e6fe845e106] Pending
helpers_test.go:344: "sp-pod" [5bd690da-7f53-4fae-a577-0e6fe845e106] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [5bd690da-7f53-4fae-a577-0e6fe845e106] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004593121s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-439000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.50s)

TestFunctional/parallel/SSHCmd (0.29s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.29s)

TestFunctional/parallel/CpCmd (0.93s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 ssh -n functional-439000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 cp functional-439000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelCpCmd2659219944/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 ssh -n functional-439000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 ssh -n functional-439000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.93s)

TestFunctional/parallel/MySQL (24.22s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-439000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-wjhs9" [eda91d75-cd71-4bc8-9bce-5ac20a9bcb79] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-wjhs9" [eda91d75-cd71-4bc8-9bce-5ac20a9bcb79] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.005444789s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-439000 exec mysql-64454c8b5c-wjhs9 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-439000 exec mysql-64454c8b5c-wjhs9 -- mysql -ppassword -e "show databases;": exit status 1 (102.59568ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-439000 exec mysql-64454c8b5c-wjhs9 -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-439000 exec mysql-64454c8b5c-wjhs9 -- mysql -ppassword -e "show databases;": exit status 1 (104.642707ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
E0806 00:18:49.895796    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
functional_test.go:1803: (dbg) Run:  kubectl --context functional-439000 exec mysql-64454c8b5c-wjhs9 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.22s)

TestFunctional/parallel/FileSync (0.18s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1437/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 ssh "sudo cat /etc/test/nested/copy/1437/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.18s)

TestFunctional/parallel/CertSync (1.11s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1437.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 ssh "sudo cat /etc/ssl/certs/1437.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1437.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 ssh "sudo cat /usr/share/ca-certificates/1437.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/14372.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 ssh "sudo cat /etc/ssl/certs/14372.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/14372.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 ssh "sudo cat /usr/share/ca-certificates/14372.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.11s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-439000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.2s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-439000 ssh "sudo systemctl is-active crio": exit status 1 (195.729004ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.20s)

TestFunctional/parallel/License (0.53s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.53s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.38s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-439000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-439000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-439000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-439000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2366: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.38s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-439000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-439000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [c77db5d7-e723-4928-b6e0-c873ea51ca27] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [c77db5d7-e723-4928-b6e0-c873ea51ca27] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.002096005s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.13s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-439000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.107.232.23 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-439000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.11s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-439000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-439000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-rnwwl" [458aa7c9-c3a5-445c-8405-2a09eabe4f9e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-rnwwl" [458aa7c9-c3a5-445c-8405-2a09eabe4f9e] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.003713756s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.26s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.26s)

TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "184.770511ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "75.862435ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "180.423465ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "77.359472ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)

TestFunctional/parallel/MountCmd/any-port (7.37s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-439000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3563264092/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722928688547966000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3563264092/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722928688547966000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3563264092/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722928688547966000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3563264092/001/test-1722928688547966000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-439000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (152.881906ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug  6 07:18 created-by-test
-rw-r--r-- 1 docker docker 24 Aug  6 07:18 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug  6 07:18 test-1722928688547966000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 ssh cat /mount-9p/test-1722928688547966000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-439000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [66ef3ff3-ba28-47ab-90f6-8394074f3e8a] Pending
helpers_test.go:344: "busybox-mount" [66ef3ff3-ba28-47ab-90f6-8394074f3e8a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [66ef3ff3-ba28-47ab-90f6-8394074f3e8a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [66ef3ff3-ba28-47ab-90f6-8394074f3e8a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003883927s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-439000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-439000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3563264092/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.37s)

TestFunctional/parallel/ServiceCmd/List (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.37s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 service list -o json
functional_test.go:1490: Took "372.879368ms" to run "out/minikube-darwin-amd64 -p functional-439000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.169.0.4:31497
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.25s)

TestFunctional/parallel/ServiceCmd/Format (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.30s)

TestFunctional/parallel/ServiceCmd/URL (0.26s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.169.0.4:31497
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.26s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.40s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-439000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port3063739437/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-439000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (160.838758ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-439000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port3063739437/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-439000 ssh "sudo umount -f /mount-9p": exit status 1 (125.080716ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-439000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-439000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port3063739437/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.40s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-439000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3548301201/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-439000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3548301201/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-439000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3548301201/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-439000 ssh "findmnt -T" /mount1: exit status 1 (159.956731ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-439000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-439000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3548301201/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-439000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3548301201/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-439000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3548301201/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.51s)

                                                
                                    
TestFunctional/parallel/Version/short (0.10s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

                                                
                                    
TestFunctional/parallel/Version/components (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-439000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-439000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-439000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-439000 image ls --format short --alsologtostderr:
I0806 00:18:28.067731    2853 out.go:291] Setting OutFile to fd 1 ...
I0806 00:18:28.068018    2853 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 00:18:28.068024    2853 out.go:304] Setting ErrFile to fd 2...
I0806 00:18:28.068028    2853 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 00:18:28.068214    2853 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
I0806 00:18:28.068826    2853 config.go:182] Loaded profile config "functional-439000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0806 00:18:28.068916    2853 config.go:182] Loaded profile config "functional-439000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0806 00:18:28.069232    2853 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0806 00:18:28.069276    2853 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0806 00:18:28.077355    2853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50952
I0806 00:18:28.077767    2853 main.go:141] libmachine: () Calling .GetVersion
I0806 00:18:28.078180    2853 main.go:141] libmachine: Using API Version  1
I0806 00:18:28.078202    2853 main.go:141] libmachine: () Calling .SetConfigRaw
I0806 00:18:28.078434    2853 main.go:141] libmachine: () Calling .GetMachineName
I0806 00:18:28.078553    2853 main.go:141] libmachine: (functional-439000) Calling .GetState
I0806 00:18:28.078642    2853 main.go:141] libmachine: (functional-439000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0806 00:18:28.078718    2853 main.go:141] libmachine: (functional-439000) DBG | hyperkit pid from json: 2105
I0806 00:18:28.080012    2853 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0806 00:18:28.080034    2853 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0806 00:18:28.088379    2853 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50954
I0806 00:18:28.088736    2853 main.go:141] libmachine: () Calling .GetVersion
I0806 00:18:28.089090    2853 main.go:141] libmachine: Using API Version  1
I0806 00:18:28.089109    2853 main.go:141] libmachine: () Calling .SetConfigRaw
I0806 00:18:28.089295    2853 main.go:141] libmachine: () Calling .GetMachineName
I0806 00:18:28.089400    2853 main.go:141] libmachine: (functional-439000) Calling .DriverName
I0806 00:18:28.089567    2853 ssh_runner.go:195] Run: systemctl --version
I0806 00:18:28.089588    2853 main.go:141] libmachine: (functional-439000) Calling .GetSSHHostname
I0806 00:18:28.089677    2853 main.go:141] libmachine: (functional-439000) Calling .GetSSHPort
I0806 00:18:28.089751    2853 main.go:141] libmachine: (functional-439000) Calling .GetSSHKeyPath
I0806 00:18:28.089833    2853 main.go:141] libmachine: (functional-439000) Calling .GetSSHUsername
I0806 00:18:28.089912    2853 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/functional-439000/id_rsa Username:docker}
I0806 00:18:28.124102    2853 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0806 00:18:28.141261    2853 main.go:141] libmachine: Making call to close driver server
I0806 00:18:28.141271    2853 main.go:141] libmachine: (functional-439000) Calling .Close
I0806 00:18:28.141468    2853 main.go:141] libmachine: Successfully made call to close driver server
I0806 00:18:28.141479    2853 main.go:141] libmachine: Making call to close connection to plugin binary
I0806 00:18:28.141490    2853 main.go:141] libmachine: Making call to close driver server
I0806 00:18:28.141497    2853 main.go:141] libmachine: (functional-439000) Calling .Close
I0806 00:18:28.141521    2853 main.go:141] libmachine: (functional-439000) DBG | Closing plugin on server side
I0806 00:18:28.141727    2853 main.go:141] libmachine: Successfully made call to close driver server
I0806 00:18:28.141731    2853 main.go:141] libmachine: (functional-439000) DBG | Closing plugin on server side
I0806 00:18:28.141740    2853 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.16s)
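Editor's note: the stderr above shows that `image ls` ultimately shells out to `docker images --no-trunc --format "{{json .}}"`, which emits one JSON object per line. A sketch of decoding that stream with `json.Decoder` follows; the field names `Repository`, `Tag`, `ID`, and `Size` are assumptions about docker's template fields.

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// dockerImage models one line of `docker images --format "{{json .}}"` output.
type dockerImage struct {
	Repository string `json:"Repository"`
	Tag        string `json:"Tag"`
	ID         string `json:"ID"`
	Size       string `json:"Size"`
}

// decodeImages reads the newline-delimited JSON objects the command produces;
// json.Decoder handles concatenated values without needing to split lines.
func decodeImages(out string) ([]dockerImage, error) {
	dec := json.NewDecoder(strings.NewReader(out))
	var imgs []dockerImage
	for dec.More() {
		var img dockerImage
		if err := dec.Decode(&img); err != nil {
			return nil, err
		}
		imgs = append(imgs, img)
	}
	return imgs, nil
}

func main() {
	sample := `{"Repository":"registry.k8s.io/pause","Tag":"3.9","ID":"sha256:e6f18","Size":"744kB"}
{"Repository":"docker.io/library/nginx","Tag":"alpine","ID":"sha256:1ae23","Size":"43.2MB"}`
	imgs, err := decodeImages(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(imgs), imgs[0].Repository)
}
```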

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-439000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-scheduler              | v1.30.3           | 3edc18e7b7672 | 62MB   |
| docker.io/library/nginx                     | latest            | a72860cb95fd5 | 188MB  |
| docker.io/library/nginx                     | alpine            | 1ae23480369fa | 43.2MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/library/minikube-local-cache-test | functional-439000 | 9c30402de3867 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.30.3           | 1f6d574d502f3 | 117MB  |
| registry.k8s.io/kube-proxy                  | v1.30.3           | 55bb025d2cfa5 | 84.7MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 76932a3b37d7e | 111MB  |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| docker.io/kicbase/echo-server               | functional-439000 | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-439000 image ls --format table --alsologtostderr:
I0806 00:18:28.377896    2862 out.go:291] Setting OutFile to fd 1 ...
I0806 00:18:28.378099    2862 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 00:18:28.378110    2862 out.go:304] Setting ErrFile to fd 2...
I0806 00:18:28.378115    2862 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 00:18:28.378303    2862 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
I0806 00:18:28.379000    2862 config.go:182] Loaded profile config "functional-439000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0806 00:18:28.379096    2862 config.go:182] Loaded profile config "functional-439000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0806 00:18:28.379449    2862 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0806 00:18:28.379507    2862 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0806 00:18:28.388132    2862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50962
I0806 00:18:28.388555    2862 main.go:141] libmachine: () Calling .GetVersion
I0806 00:18:28.388971    2862 main.go:141] libmachine: Using API Version  1
I0806 00:18:28.389001    2862 main.go:141] libmachine: () Calling .SetConfigRaw
I0806 00:18:28.389251    2862 main.go:141] libmachine: () Calling .GetMachineName
I0806 00:18:28.389370    2862 main.go:141] libmachine: (functional-439000) Calling .GetState
I0806 00:18:28.389484    2862 main.go:141] libmachine: (functional-439000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0806 00:18:28.389537    2862 main.go:141] libmachine: (functional-439000) DBG | hyperkit pid from json: 2105
I0806 00:18:28.390844    2862 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0806 00:18:28.390869    2862 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0806 00:18:28.399328    2862 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50964
I0806 00:18:28.399666    2862 main.go:141] libmachine: () Calling .GetVersion
I0806 00:18:28.400009    2862 main.go:141] libmachine: Using API Version  1
I0806 00:18:28.400026    2862 main.go:141] libmachine: () Calling .SetConfigRaw
I0806 00:18:28.400215    2862 main.go:141] libmachine: () Calling .GetMachineName
I0806 00:18:28.400326    2862 main.go:141] libmachine: (functional-439000) Calling .DriverName
I0806 00:18:28.400485    2862 ssh_runner.go:195] Run: systemctl --version
I0806 00:18:28.400509    2862 main.go:141] libmachine: (functional-439000) Calling .GetSSHHostname
I0806 00:18:28.400593    2862 main.go:141] libmachine: (functional-439000) Calling .GetSSHPort
I0806 00:18:28.400668    2862 main.go:141] libmachine: (functional-439000) Calling .GetSSHKeyPath
I0806 00:18:28.400752    2862 main.go:141] libmachine: (functional-439000) Calling .GetSSHUsername
I0806 00:18:28.400838    2862 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/functional-439000/id_rsa Username:docker}
I0806 00:18:28.433895    2862 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0806 00:18:28.450527    2862 main.go:141] libmachine: Making call to close driver server
I0806 00:18:28.450538    2862 main.go:141] libmachine: (functional-439000) Calling .Close
I0806 00:18:28.450692    2862 main.go:141] libmachine: Successfully made call to close driver server
I0806 00:18:28.450692    2862 main.go:141] libmachine: (functional-439000) DBG | Closing plugin on server side
I0806 00:18:28.450703    2862 main.go:141] libmachine: Making call to close connection to plugin binary
I0806 00:18:28.450712    2862 main.go:141] libmachine: Making call to close driver server
I0806 00:18:28.450718    2862 main.go:141] libmachine: (functional-439000) Calling .Close
I0806 00:18:28.450856    2862 main.go:141] libmachine: Successfully made call to close driver server
I0806 00:18:28.450865    2862 main.go:141] libmachine: Making call to close connection to plugin binary
I0806 00:18:28.450865    2862 main.go:141] libmachine: (functional-439000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-439000 image ls --format json --alsologtostderr:
[{"id":"9c30402de386742a7c24041ebb9224700a13fbd33712bd8ec8a863eb61248b96","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-439000"],"size":"30"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117000000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"111000000"},{"id":"1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"cbb01a7bd410dc08ba382018ab909a
674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"84700000"},{"id":"a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a","repoDigests":[],"repoTags":["docker.io/library/nginx:lat
est"],"size":"188000000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"62000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-439000"],"size":"4940000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size"
:"31500000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-439000 image ls --format json --alsologtostderr:
I0806 00:18:28.223222    2858 out.go:291] Setting OutFile to fd 1 ...
I0806 00:18:28.223412    2858 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 00:18:28.223422    2858 out.go:304] Setting ErrFile to fd 2...
I0806 00:18:28.223427    2858 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 00:18:28.223605    2858 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
I0806 00:18:28.224189    2858 config.go:182] Loaded profile config "functional-439000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0806 00:18:28.224288    2858 config.go:182] Loaded profile config "functional-439000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0806 00:18:28.224688    2858 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0806 00:18:28.224726    2858 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0806 00:18:28.233003    2858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50957
I0806 00:18:28.233412    2858 main.go:141] libmachine: () Calling .GetVersion
I0806 00:18:28.233809    2858 main.go:141] libmachine: Using API Version  1
I0806 00:18:28.233818    2858 main.go:141] libmachine: () Calling .SetConfigRaw
I0806 00:18:28.234040    2858 main.go:141] libmachine: () Calling .GetMachineName
I0806 00:18:28.234144    2858 main.go:141] libmachine: (functional-439000) Calling .GetState
I0806 00:18:28.234222    2858 main.go:141] libmachine: (functional-439000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0806 00:18:28.234300    2858 main.go:141] libmachine: (functional-439000) DBG | hyperkit pid from json: 2105
I0806 00:18:28.235551    2858 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0806 00:18:28.235573    2858 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0806 00:18:28.243864    2858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50959
I0806 00:18:28.244194    2858 main.go:141] libmachine: () Calling .GetVersion
I0806 00:18:28.244573    2858 main.go:141] libmachine: Using API Version  1
I0806 00:18:28.244595    2858 main.go:141] libmachine: () Calling .SetConfigRaw
I0806 00:18:28.244799    2858 main.go:141] libmachine: () Calling .GetMachineName
I0806 00:18:28.244898    2858 main.go:141] libmachine: (functional-439000) Calling .DriverName
I0806 00:18:28.245059    2858 ssh_runner.go:195] Run: systemctl --version
I0806 00:18:28.245080    2858 main.go:141] libmachine: (functional-439000) Calling .GetSSHHostname
I0806 00:18:28.245182    2858 main.go:141] libmachine: (functional-439000) Calling .GetSSHPort
I0806 00:18:28.245368    2858 main.go:141] libmachine: (functional-439000) Calling .GetSSHKeyPath
I0806 00:18:28.245467    2858 main.go:141] libmachine: (functional-439000) Calling .GetSSHUsername
I0806 00:18:28.245556    2858 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/functional-439000/id_rsa Username:docker}
I0806 00:18:28.280238    2858 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0806 00:18:28.296953    2858 main.go:141] libmachine: Making call to close driver server
I0806 00:18:28.296962    2858 main.go:141] libmachine: (functional-439000) Calling .Close
I0806 00:18:28.297108    2858 main.go:141] libmachine: (functional-439000) DBG | Closing plugin on server side
I0806 00:18:28.297109    2858 main.go:141] libmachine: Successfully made call to close driver server
I0806 00:18:28.297123    2858 main.go:141] libmachine: Making call to close connection to plugin binary
I0806 00:18:28.297146    2858 main.go:141] libmachine: Making call to close driver server
I0806 00:18:28.297153    2858 main.go:141] libmachine: (functional-439000) Calling .Close
I0806 00:18:28.297313    2858 main.go:141] libmachine: (functional-439000) DBG | Closing plugin on server side
I0806 00:18:28.297322    2858 main.go:141] libmachine: Successfully made call to close driver server
I0806 00:18:28.297337    2858 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.15s)
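Editor's note: the `image ls --format json` stdout above is a single JSON array whose elements carry `id`, `repoDigests`, `repoTags`, and `size` keys (all visible in the log). A minimal sketch of unmarshaling that array:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// imageEntry mirrors the fields seen in the `image ls --format json` output above.
type imageEntry struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

// parseImageList decodes the JSON array emitted by `image ls --format json`.
func parseImageList(data []byte) ([]imageEntry, error) {
	var entries []imageEntry
	if err := json.Unmarshal(data, &entries); err != nil {
		return nil, err
	}
	return entries, nil
}

func main() {
	// One entry taken from the report's own output (id truncated for brevity).
	sample := []byte(`[{"id":"9c30402de386","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-439000"],"size":"30"}]`)
	entries, err := parseImageList(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(entries[0].RepoTags[0], entries[0].Size)
}
```

Note that `size` is a string of bytes, not a number, so numeric comparisons require an explicit conversion.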

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-439000 image ls --format yaml --alsologtostderr:
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-439000
size: "4940000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "62000000"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "84700000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117000000"
- id: a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "111000000"
- id: 1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 9c30402de386742a7c24041ebb9224700a13fbd33712bd8ec8a863eb61248b96
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-439000
size: "30"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-439000 image ls --format yaml --alsologtostderr:
I0806 00:18:27.906177    2849 out.go:291] Setting OutFile to fd 1 ...
I0806 00:18:27.906374    2849 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 00:18:27.906379    2849 out.go:304] Setting ErrFile to fd 2...
I0806 00:18:27.906383    2849 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 00:18:27.906553    2849 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
I0806 00:18:27.907134    2849 config.go:182] Loaded profile config "functional-439000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0806 00:18:27.907231    2849 config.go:182] Loaded profile config "functional-439000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0806 00:18:27.907621    2849 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0806 00:18:27.907668    2849 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0806 00:18:27.915957    2849 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50947
I0806 00:18:27.916381    2849 main.go:141] libmachine: () Calling .GetVersion
I0806 00:18:27.916783    2849 main.go:141] libmachine: Using API Version  1
I0806 00:18:27.916800    2849 main.go:141] libmachine: () Calling .SetConfigRaw
I0806 00:18:27.917002    2849 main.go:141] libmachine: () Calling .GetMachineName
I0806 00:18:27.917115    2849 main.go:141] libmachine: (functional-439000) Calling .GetState
I0806 00:18:27.917215    2849 main.go:141] libmachine: (functional-439000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0806 00:18:27.917279    2849 main.go:141] libmachine: (functional-439000) DBG | hyperkit pid from json: 2105
I0806 00:18:27.918556    2849 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0806 00:18:27.918577    2849 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0806 00:18:27.926950    2849 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50949
I0806 00:18:27.927306    2849 main.go:141] libmachine: () Calling .GetVersion
I0806 00:18:27.927627    2849 main.go:141] libmachine: Using API Version  1
I0806 00:18:27.927638    2849 main.go:141] libmachine: () Calling .SetConfigRaw
I0806 00:18:27.927836    2849 main.go:141] libmachine: () Calling .GetMachineName
I0806 00:18:27.927948    2849 main.go:141] libmachine: (functional-439000) Calling .DriverName
I0806 00:18:27.928102    2849 ssh_runner.go:195] Run: systemctl --version
I0806 00:18:27.928119    2849 main.go:141] libmachine: (functional-439000) Calling .GetSSHHostname
I0806 00:18:27.928189    2849 main.go:141] libmachine: (functional-439000) Calling .GetSSHPort
I0806 00:18:27.928289    2849 main.go:141] libmachine: (functional-439000) Calling .GetSSHKeyPath
I0806 00:18:27.928376    2849 main.go:141] libmachine: (functional-439000) Calling .GetSSHUsername
I0806 00:18:27.928454    2849 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/functional-439000/id_rsa Username:docker}
I0806 00:18:27.969095    2849 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0806 00:18:27.986610    2849 main.go:141] libmachine: Making call to close driver server
I0806 00:18:27.986628    2849 main.go:141] libmachine: (functional-439000) Calling .Close
I0806 00:18:27.986775    2849 main.go:141] libmachine: Successfully made call to close driver server
I0806 00:18:27.986786    2849 main.go:141] libmachine: Making call to close connection to plugin binary
I0806 00:18:27.986793    2849 main.go:141] libmachine: Making call to close driver server
I0806 00:18:27.986793    2849 main.go:141] libmachine: (functional-439000) DBG | Closing plugin on server side
I0806 00:18:27.986798    2849 main.go:141] libmachine: (functional-439000) Calling .Close
I0806 00:18:27.986908    2849 main.go:141] libmachine: Successfully made call to close driver server
I0806 00:18:27.986918    2849 main.go:141] libmachine: Making call to close connection to plugin binary
I0806 00:18:27.986942    2849 main.go:141] libmachine: (functional-439000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.16s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-439000 ssh pgrep buildkitd: exit status 1 (123.72924ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 image build -t localhost/my-image:functional-439000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-439000 image build -t localhost/my-image:functional-439000 testdata/build --alsologtostderr: (2.797987795s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-439000 image build -t localhost/my-image:functional-439000 testdata/build --alsologtostderr:
I0806 00:18:28.653894    2871 out.go:291] Setting OutFile to fd 1 ...
I0806 00:18:28.654263    2871 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 00:18:28.654269    2871 out.go:304] Setting ErrFile to fd 2...
I0806 00:18:28.654273    2871 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0806 00:18:28.654448    2871 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
I0806 00:18:28.655058    2871 config.go:182] Loaded profile config "functional-439000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0806 00:18:28.656233    2871 config.go:182] Loaded profile config "functional-439000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0806 00:18:28.656570    2871 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0806 00:18:28.656608    2871 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0806 00:18:28.664858    2871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50974
I0806 00:18:28.665252    2871 main.go:141] libmachine: () Calling .GetVersion
I0806 00:18:28.665672    2871 main.go:141] libmachine: Using API Version  1
I0806 00:18:28.665681    2871 main.go:141] libmachine: () Calling .SetConfigRaw
I0806 00:18:28.665942    2871 main.go:141] libmachine: () Calling .GetMachineName
I0806 00:18:28.666072    2871 main.go:141] libmachine: (functional-439000) Calling .GetState
I0806 00:18:28.666158    2871 main.go:141] libmachine: (functional-439000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0806 00:18:28.666229    2871 main.go:141] libmachine: (functional-439000) DBG | hyperkit pid from json: 2105
I0806 00:18:28.667525    2871 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0806 00:18:28.667548    2871 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0806 00:18:28.675846    2871 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50976
I0806 00:18:28.676207    2871 main.go:141] libmachine: () Calling .GetVersion
I0806 00:18:28.676554    2871 main.go:141] libmachine: Using API Version  1
I0806 00:18:28.676564    2871 main.go:141] libmachine: () Calling .SetConfigRaw
I0806 00:18:28.676823    2871 main.go:141] libmachine: () Calling .GetMachineName
I0806 00:18:28.676953    2871 main.go:141] libmachine: (functional-439000) Calling .DriverName
I0806 00:18:28.677112    2871 ssh_runner.go:195] Run: systemctl --version
I0806 00:18:28.677137    2871 main.go:141] libmachine: (functional-439000) Calling .GetSSHHostname
I0806 00:18:28.677220    2871 main.go:141] libmachine: (functional-439000) Calling .GetSSHPort
I0806 00:18:28.677343    2871 main.go:141] libmachine: (functional-439000) Calling .GetSSHKeyPath
I0806 00:18:28.677430    2871 main.go:141] libmachine: (functional-439000) Calling .GetSSHUsername
I0806 00:18:28.677530    2871 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/functional-439000/id_rsa Username:docker}
I0806 00:18:28.710965    2871 build_images.go:161] Building image from path: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.3645664774.tar
I0806 00:18:28.711036    2871 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0806 00:18:28.719041    2871 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3645664774.tar
I0806 00:18:28.722499    2871 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3645664774.tar: stat -c "%s %y" /var/lib/minikube/build/build.3645664774.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3645664774.tar': No such file or directory
I0806 00:18:28.722530    2871 ssh_runner.go:362] scp /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.3645664774.tar --> /var/lib/minikube/build/build.3645664774.tar (3072 bytes)
I0806 00:18:28.749521    2871 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3645664774
I0806 00:18:28.757305    2871 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3645664774 -xf /var/lib/minikube/build/build.3645664774.tar
I0806 00:18:28.764947    2871 docker.go:360] Building image: /var/lib/minikube/build/build.3645664774
I0806 00:18:28.765014    2871 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-439000 /var/lib/minikube/build/build.3645664774
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.1s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.4s
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.6s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.7s

#6 [2/3] RUN true
#6 DONE 0.5s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:919a19fc0e4d9af0ed18254b9fd35fcfc06e94c826104a438e548e44f058889b done
#8 naming to localhost/my-image:functional-439000 done
#8 DONE 0.0s
I0806 00:18:31.349155    2871 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-439000 /var/lib/minikube/build/build.3645664774: (2.584118679s)
I0806 00:18:31.349216    2871 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3645664774
I0806 00:18:31.358418    2871 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3645664774.tar
I0806 00:18:31.368716    2871 build_images.go:217] Built localhost/my-image:functional-439000 from /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.3645664774.tar
I0806 00:18:31.368741    2871 build_images.go:133] succeeded building to: functional-439000
I0806 00:18:31.368746    2871 build_images.go:134] failed building to: 
I0806 00:18:31.368762    2871 main.go:141] libmachine: Making call to close driver server
I0806 00:18:31.368769    2871 main.go:141] libmachine: (functional-439000) Calling .Close
I0806 00:18:31.368932    2871 main.go:141] libmachine: Successfully made call to close driver server
I0806 00:18:31.368942    2871 main.go:141] libmachine: Making call to close connection to plugin binary
I0806 00:18:31.368948    2871 main.go:141] libmachine: Making call to close driver server
I0806 00:18:31.368953    2871 main.go:141] libmachine: (functional-439000) Calling .Close
I0806 00:18:31.369110    2871 main.go:141] libmachine: Successfully made call to close driver server
I0806 00:18:31.369123    2871 main.go:141] libmachine: Making call to close connection to plugin binary
I0806 00:18:31.369133    2871 main.go:141] libmachine: (functional-439000) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.09s)

TestFunctional/parallel/ImageCommands/Setup (1.87s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.840409418s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-439000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.87s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 image load --daemon docker.io/kicbase/echo-server:functional-439000 --alsologtostderr
E0806 00:18:22.205059    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-439000 image load --daemon docker.io/kicbase/echo-server:functional-439000 --alsologtostderr: (1.076808412s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.27s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 image load --daemon docker.io/kicbase/echo-server:functional-439000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.76s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-439000
2024/08/06 00:18:24 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 image load --daemon docker.io/kicbase/echo-server:functional-439000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.48s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 image save docker.io/kicbase/echo-server:functional-439000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 image rm docker.io/kicbase/echo-server:functional-439000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.39s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.54s)

TestFunctional/parallel/DockerEnv/bash (0.62s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-439000 docker-env) && out/minikube-darwin-amd64 status -p functional-439000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-439000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.62s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-439000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 image save --daemon docker.io/kicbase/echo-server:functional-439000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-439000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.32s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-439000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-439000
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-439000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-439000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (205.23s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-772000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit 
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-772000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit : (3m24.860589456s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (205.23s)

TestMultiControlPlane/serial/DeployApp (4.9s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-772000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-772000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-772000 -- rollout status deployment/busybox: (2.617621873s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-772000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-772000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-772000 -- exec busybox-fc5497c4f-dmkrt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-772000 -- exec busybox-fc5497c4f-ljs4p -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-772000 -- exec busybox-fc5497c4f-rq4zv -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-772000 -- exec busybox-fc5497c4f-dmkrt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-772000 -- exec busybox-fc5497c4f-ljs4p -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-772000 -- exec busybox-fc5497c4f-rq4zv -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-772000 -- exec busybox-fc5497c4f-dmkrt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-772000 -- exec busybox-fc5497c4f-ljs4p -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-772000 -- exec busybox-fc5497c4f-rq4zv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.90s)

TestMultiControlPlane/serial/PingHostFromPods (1.27s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-772000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-772000 -- exec busybox-fc5497c4f-dmkrt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-772000 -- exec busybox-fc5497c4f-dmkrt -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-772000 -- exec busybox-fc5497c4f-ljs4p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-772000 -- exec busybox-fc5497c4f-ljs4p -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-772000 -- exec busybox-fc5497c4f-rq4zv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-772000 -- exec busybox-fc5497c4f-rq4zv -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.27s)

TestMultiControlPlane/serial/AddWorkerNode (49.54s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-772000 -v=7 --alsologtostderr
E0806 00:22:41.268803    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
E0806 00:22:41.274101    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
E0806 00:22:41.284917    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
E0806 00:22:41.305280    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
E0806 00:22:41.346328    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
E0806 00:22:41.426822    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
E0806 00:22:41.588263    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
E0806 00:22:41.909255    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
E0806 00:22:42.549965    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
E0806 00:22:43.831387    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
E0806 00:22:46.393145    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
E0806 00:22:51.514209    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
E0806 00:23:01.755518    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-772000 -v=7 --alsologtostderr: (49.101116951s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (49.54s)

TestMultiControlPlane/serial/NodeLabels (0.05s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-772000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.05s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.34s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.34s)

TestMultiControlPlane/serial/CopyFile (8.98s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 cp testdata/cp-test.txt ha-772000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 ssh -n ha-772000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 cp ha-772000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile3124255688/001/cp-test_ha-772000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 ssh -n ha-772000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 cp ha-772000:/home/docker/cp-test.txt ha-772000-m02:/home/docker/cp-test_ha-772000_ha-772000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 ssh -n ha-772000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 ssh -n ha-772000-m02 "sudo cat /home/docker/cp-test_ha-772000_ha-772000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 cp ha-772000:/home/docker/cp-test.txt ha-772000-m03:/home/docker/cp-test_ha-772000_ha-772000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 ssh -n ha-772000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 ssh -n ha-772000-m03 "sudo cat /home/docker/cp-test_ha-772000_ha-772000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 cp ha-772000:/home/docker/cp-test.txt ha-772000-m04:/home/docker/cp-test_ha-772000_ha-772000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 ssh -n ha-772000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 ssh -n ha-772000-m04 "sudo cat /home/docker/cp-test_ha-772000_ha-772000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 cp testdata/cp-test.txt ha-772000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 ssh -n ha-772000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 cp ha-772000-m02:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile3124255688/001/cp-test_ha-772000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 ssh -n ha-772000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 cp ha-772000-m02:/home/docker/cp-test.txt ha-772000:/home/docker/cp-test_ha-772000-m02_ha-772000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 ssh -n ha-772000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 ssh -n ha-772000 "sudo cat /home/docker/cp-test_ha-772000-m02_ha-772000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 cp ha-772000-m02:/home/docker/cp-test.txt ha-772000-m03:/home/docker/cp-test_ha-772000-m02_ha-772000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 ssh -n ha-772000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 ssh -n ha-772000-m03 "sudo cat /home/docker/cp-test_ha-772000-m02_ha-772000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 cp ha-772000-m02:/home/docker/cp-test.txt ha-772000-m04:/home/docker/cp-test_ha-772000-m02_ha-772000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 ssh -n ha-772000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 ssh -n ha-772000-m04 "sudo cat /home/docker/cp-test_ha-772000-m02_ha-772000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 cp testdata/cp-test.txt ha-772000-m03:/home/docker/cp-test.txt
E0806 00:23:22.206872    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
E0806 00:23:22.236457    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 ssh -n ha-772000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 cp ha-772000-m03:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile3124255688/001/cp-test_ha-772000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 ssh -n ha-772000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 cp ha-772000-m03:/home/docker/cp-test.txt ha-772000:/home/docker/cp-test_ha-772000-m03_ha-772000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 ssh -n ha-772000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 ssh -n ha-772000 "sudo cat /home/docker/cp-test_ha-772000-m03_ha-772000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 cp ha-772000-m03:/home/docker/cp-test.txt ha-772000-m02:/home/docker/cp-test_ha-772000-m03_ha-772000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 ssh -n ha-772000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 ssh -n ha-772000-m02 "sudo cat /home/docker/cp-test_ha-772000-m03_ha-772000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 cp ha-772000-m03:/home/docker/cp-test.txt ha-772000-m04:/home/docker/cp-test_ha-772000-m03_ha-772000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 ssh -n ha-772000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 ssh -n ha-772000-m04 "sudo cat /home/docker/cp-test_ha-772000-m03_ha-772000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 cp testdata/cp-test.txt ha-772000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 ssh -n ha-772000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 cp ha-772000-m04:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile3124255688/001/cp-test_ha-772000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 ssh -n ha-772000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 cp ha-772000-m04:/home/docker/cp-test.txt ha-772000:/home/docker/cp-test_ha-772000-m04_ha-772000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 ssh -n ha-772000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 ssh -n ha-772000 "sudo cat /home/docker/cp-test_ha-772000-m04_ha-772000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 cp ha-772000-m04:/home/docker/cp-test.txt ha-772000-m02:/home/docker/cp-test_ha-772000-m04_ha-772000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 ssh -n ha-772000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 ssh -n ha-772000-m02 "sudo cat /home/docker/cp-test_ha-772000-m04_ha-772000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 cp ha-772000-m04:/home/docker/cp-test.txt ha-772000-m03:/home/docker/cp-test_ha-772000-m04_ha-772000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 ssh -n ha-772000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 ssh -n ha-772000-m03 "sudo cat /home/docker/cp-test_ha-772000-m04_ha-772000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (8.98s)

TestMultiControlPlane/serial/StopSecondaryNode (8.70s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-772000 node stop m02 -v=7 --alsologtostderr: (8.348039401s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-772000 status -v=7 --alsologtostderr: exit status 7 (347.019546ms)

-- stdout --
	ha-772000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-772000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-772000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-772000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0806 00:23:34.794568    3394 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:23:34.794855    3394 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:23:34.794861    3394 out.go:304] Setting ErrFile to fd 2...
	I0806 00:23:34.794865    3394 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:23:34.795059    3394 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	I0806 00:23:34.795253    3394 out.go:298] Setting JSON to false
	I0806 00:23:34.795276    3394 mustload.go:65] Loading cluster: ha-772000
	I0806 00:23:34.795309    3394 notify.go:220] Checking for updates...
	I0806 00:23:34.795582    3394 config.go:182] Loaded profile config "ha-772000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:23:34.795597    3394 status.go:255] checking status of ha-772000 ...
	I0806 00:23:34.795965    3394 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:23:34.796024    3394 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:23:34.804870    3394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51715
	I0806 00:23:34.805263    3394 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:23:34.805670    3394 main.go:141] libmachine: Using API Version  1
	I0806 00:23:34.805679    3394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:23:34.805881    3394 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:23:34.805978    3394 main.go:141] libmachine: (ha-772000) Calling .GetState
	I0806 00:23:34.806071    3394 main.go:141] libmachine: (ha-772000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:23:34.806147    3394 main.go:141] libmachine: (ha-772000) DBG | hyperkit pid from json: 2909
	I0806 00:23:34.807149    3394 status.go:330] ha-772000 host status = "Running" (err=<nil>)
	I0806 00:23:34.807170    3394 host.go:66] Checking if "ha-772000" exists ...
	I0806 00:23:34.807431    3394 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:23:34.807452    3394 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:23:34.815833    3394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51717
	I0806 00:23:34.816214    3394 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:23:34.816539    3394 main.go:141] libmachine: Using API Version  1
	I0806 00:23:34.816548    3394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:23:34.816773    3394 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:23:34.816884    3394 main.go:141] libmachine: (ha-772000) Calling .GetIP
	I0806 00:23:34.816967    3394 host.go:66] Checking if "ha-772000" exists ...
	I0806 00:23:34.817228    3394 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:23:34.817257    3394 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:23:34.825594    3394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51719
	I0806 00:23:34.825906    3394 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:23:34.826216    3394 main.go:141] libmachine: Using API Version  1
	I0806 00:23:34.826226    3394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:23:34.826443    3394 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:23:34.826549    3394 main.go:141] libmachine: (ha-772000) Calling .DriverName
	I0806 00:23:34.826703    3394 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:23:34.826722    3394 main.go:141] libmachine: (ha-772000) Calling .GetSSHHostname
	I0806 00:23:34.826801    3394 main.go:141] libmachine: (ha-772000) Calling .GetSSHPort
	I0806 00:23:34.826882    3394 main.go:141] libmachine: (ha-772000) Calling .GetSSHKeyPath
	I0806 00:23:34.826957    3394 main.go:141] libmachine: (ha-772000) Calling .GetSSHUsername
	I0806 00:23:34.827039    3394 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/ha-772000/id_rsa Username:docker}
	I0806 00:23:34.862345    3394 ssh_runner.go:195] Run: systemctl --version
	I0806 00:23:34.866602    3394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:23:34.877337    3394 kubeconfig.go:125] found "ha-772000" server: "https://192.169.0.254:8443"
	I0806 00:23:34.877366    3394 api_server.go:166] Checking apiserver status ...
	I0806 00:23:34.877413    3394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:23:34.889976    3394 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2032/cgroup
	W0806 00:23:34.897285    3394 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2032/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0806 00:23:34.897326    3394 ssh_runner.go:195] Run: ls
	I0806 00:23:34.900635    3394 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0806 00:23:34.905227    3394 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0806 00:23:34.905239    3394 status.go:422] ha-772000 apiserver status = Running (err=<nil>)
	I0806 00:23:34.905249    3394 status.go:257] ha-772000 status: &{Name:ha-772000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 00:23:34.905261    3394 status.go:255] checking status of ha-772000-m02 ...
	I0806 00:23:34.905532    3394 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:23:34.905552    3394 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:23:34.914123    3394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51723
	I0806 00:23:34.914454    3394 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:23:34.914789    3394 main.go:141] libmachine: Using API Version  1
	I0806 00:23:34.914797    3394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:23:34.914991    3394 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:23:34.915134    3394 main.go:141] libmachine: (ha-772000-m02) Calling .GetState
	I0806 00:23:34.915226    3394 main.go:141] libmachine: (ha-772000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:23:34.915303    3394 main.go:141] libmachine: (ha-772000-m02) DBG | hyperkit pid from json: 2928
	I0806 00:23:34.916291    3394 main.go:141] libmachine: (ha-772000-m02) DBG | hyperkit pid 2928 missing from process table
	I0806 00:23:34.916331    3394 status.go:330] ha-772000-m02 host status = "Stopped" (err=<nil>)
	I0806 00:23:34.916338    3394 status.go:343] host is not running, skipping remaining checks
	I0806 00:23:34.916351    3394 status.go:257] ha-772000-m02 status: &{Name:ha-772000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 00:23:34.916366    3394 status.go:255] checking status of ha-772000-m03 ...
	I0806 00:23:34.916626    3394 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:23:34.916652    3394 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:23:34.925361    3394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51725
	I0806 00:23:34.925687    3394 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:23:34.926009    3394 main.go:141] libmachine: Using API Version  1
	I0806 00:23:34.926023    3394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:23:34.926257    3394 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:23:34.926372    3394 main.go:141] libmachine: (ha-772000-m03) Calling .GetState
	I0806 00:23:34.926481    3394 main.go:141] libmachine: (ha-772000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:23:34.926556    3394 main.go:141] libmachine: (ha-772000-m03) DBG | hyperkit pid from json: 2955
	I0806 00:23:34.927546    3394 status.go:330] ha-772000-m03 host status = "Running" (err=<nil>)
	I0806 00:23:34.927554    3394 host.go:66] Checking if "ha-772000-m03" exists ...
	I0806 00:23:34.927800    3394 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:23:34.927828    3394 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:23:34.936334    3394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51727
	I0806 00:23:34.936688    3394 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:23:34.937021    3394 main.go:141] libmachine: Using API Version  1
	I0806 00:23:34.937038    3394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:23:34.937245    3394 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:23:34.937360    3394 main.go:141] libmachine: (ha-772000-m03) Calling .GetIP
	I0806 00:23:34.937444    3394 host.go:66] Checking if "ha-772000-m03" exists ...
	I0806 00:23:34.937688    3394 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:23:34.937711    3394 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:23:34.946204    3394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51729
	I0806 00:23:34.946554    3394 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:23:34.946872    3394 main.go:141] libmachine: Using API Version  1
	I0806 00:23:34.946887    3394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:23:34.947097    3394 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:23:34.947209    3394 main.go:141] libmachine: (ha-772000-m03) Calling .DriverName
	I0806 00:23:34.947331    3394 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:23:34.947341    3394 main.go:141] libmachine: (ha-772000-m03) Calling .GetSSHHostname
	I0806 00:23:34.947434    3394 main.go:141] libmachine: (ha-772000-m03) Calling .GetSSHPort
	I0806 00:23:34.947514    3394 main.go:141] libmachine: (ha-772000-m03) Calling .GetSSHKeyPath
	I0806 00:23:34.947608    3394 main.go:141] libmachine: (ha-772000-m03) Calling .GetSSHUsername
	I0806 00:23:34.947684    3394 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/ha-772000-m03/id_rsa Username:docker}
	I0806 00:23:34.979431    3394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:23:34.990873    3394 kubeconfig.go:125] found "ha-772000" server: "https://192.169.0.254:8443"
	I0806 00:23:34.990887    3394 api_server.go:166] Checking apiserver status ...
	I0806 00:23:34.990924    3394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0806 00:23:35.001676    3394 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1967/cgroup
	W0806 00:23:35.008957    3394 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1967/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0806 00:23:35.009003    3394 ssh_runner.go:195] Run: ls
	I0806 00:23:35.012557    3394 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0806 00:23:35.017339    3394 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0806 00:23:35.017355    3394 status.go:422] ha-772000-m03 apiserver status = Running (err=<nil>)
	I0806 00:23:35.017364    3394 status.go:257] ha-772000-m03 status: &{Name:ha-772000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 00:23:35.017377    3394 status.go:255] checking status of ha-772000-m04 ...
	I0806 00:23:35.017649    3394 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:23:35.017674    3394 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:23:35.026353    3394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51733
	I0806 00:23:35.026720    3394 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:23:35.027058    3394 main.go:141] libmachine: Using API Version  1
	I0806 00:23:35.027075    3394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:23:35.027255    3394 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:23:35.027371    3394 main.go:141] libmachine: (ha-772000-m04) Calling .GetState
	I0806 00:23:35.027463    3394 main.go:141] libmachine: (ha-772000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:23:35.027547    3394 main.go:141] libmachine: (ha-772000-m04) DBG | hyperkit pid from json: 3064
	I0806 00:23:35.028532    3394 status.go:330] ha-772000-m04 host status = "Running" (err=<nil>)
	I0806 00:23:35.028542    3394 host.go:66] Checking if "ha-772000-m04" exists ...
	I0806 00:23:35.028800    3394 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:23:35.028827    3394 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:23:35.037313    3394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51735
	I0806 00:23:35.037676    3394 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:23:35.038028    3394 main.go:141] libmachine: Using API Version  1
	I0806 00:23:35.038043    3394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:23:35.038271    3394 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:23:35.038388    3394 main.go:141] libmachine: (ha-772000-m04) Calling .GetIP
	I0806 00:23:35.038470    3394 host.go:66] Checking if "ha-772000-m04" exists ...
	I0806 00:23:35.038737    3394 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:23:35.038758    3394 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:23:35.047223    3394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51737
	I0806 00:23:35.047581    3394 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:23:35.047959    3394 main.go:141] libmachine: Using API Version  1
	I0806 00:23:35.047977    3394 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:23:35.048168    3394 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:23:35.048304    3394 main.go:141] libmachine: (ha-772000-m04) Calling .DriverName
	I0806 00:23:35.048446    3394 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0806 00:23:35.048458    3394 main.go:141] libmachine: (ha-772000-m04) Calling .GetSSHHostname
	I0806 00:23:35.048543    3394 main.go:141] libmachine: (ha-772000-m04) Calling .GetSSHPort
	I0806 00:23:35.048646    3394 main.go:141] libmachine: (ha-772000-m04) Calling .GetSSHKeyPath
	I0806 00:23:35.048738    3394 main.go:141] libmachine: (ha-772000-m04) Calling .GetSSHUsername
	I0806 00:23:35.048824    3394 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19370-944/.minikube/machines/ha-772000-m04/id_rsa Username:docker}
	I0806 00:23:35.076730    3394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0806 00:23:35.086864    3394 status.go:257] ha-772000-m04 status: &{Name:ha-772000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (8.70s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.27s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.27s)

TestMultiControlPlane/serial/RestartSecondaryNode (43.41s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 node start m02 -v=7 --alsologtostderr
E0806 00:24:03.197374    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-772000 node start m02 -v=7 --alsologtostderr: (42.921347807s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (43.41s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.33s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.33s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (212.85s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-772000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-772000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-772000 -v=7 --alsologtostderr: (27.119985513s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-772000 --wait=true -v=7 --alsologtostderr
E0806 00:25:25.119809    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
E0806 00:27:41.269872    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-darwin-amd64 start -p ha-772000 --wait=true -v=7 --alsologtostderr: (3m5.621321892s)
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-772000
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (212.85s)

TestMultiControlPlane/serial/DeleteSecondaryNode (8.08s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-darwin-amd64 -p ha-772000 node delete m03 -v=7 --alsologtostderr: (7.608203381s)
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (8.08s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.27s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.27s)

TestMultiControlPlane/serial/StopCluster (24.98s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 stop -v=7 --alsologtostderr
E0806 00:28:08.960965    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
E0806 00:28:22.206444    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-772000 stop -v=7 --alsologtostderr: (24.89503652s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-772000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-772000 status -v=7 --alsologtostderr: exit status 7 (89.533844ms)

-- stdout --
	ha-772000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-772000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-772000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0806 00:28:25.248477    3604 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:28:25.248688    3604 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:28:25.248693    3604 out.go:304] Setting ErrFile to fd 2...
	I0806 00:28:25.248697    3604 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:28:25.248868    3604 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	I0806 00:28:25.249050    3604 out.go:298] Setting JSON to false
	I0806 00:28:25.249076    3604 mustload.go:65] Loading cluster: ha-772000
	I0806 00:28:25.249113    3604 notify.go:220] Checking for updates...
	I0806 00:28:25.249390    3604 config.go:182] Loaded profile config "ha-772000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:28:25.249407    3604 status.go:255] checking status of ha-772000 ...
	I0806 00:28:25.249764    3604 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:28:25.249821    3604 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:28:25.258483    3604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52043
	I0806 00:28:25.258806    3604 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:28:25.259287    3604 main.go:141] libmachine: Using API Version  1
	I0806 00:28:25.259301    3604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:28:25.259536    3604 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:28:25.259660    3604 main.go:141] libmachine: (ha-772000) Calling .GetState
	I0806 00:28:25.259778    3604 main.go:141] libmachine: (ha-772000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:28:25.259817    3604 main.go:141] libmachine: (ha-772000) DBG | hyperkit pid from json: 3478
	I0806 00:28:25.260731    3604 main.go:141] libmachine: (ha-772000) DBG | hyperkit pid 3478 missing from process table
	I0806 00:28:25.260761    3604 status.go:330] ha-772000 host status = "Stopped" (err=<nil>)
	I0806 00:28:25.260768    3604 status.go:343] host is not running, skipping remaining checks
	I0806 00:28:25.260775    3604 status.go:257] ha-772000 status: &{Name:ha-772000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 00:28:25.260794    3604 status.go:255] checking status of ha-772000-m02 ...
	I0806 00:28:25.261049    3604 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:28:25.261070    3604 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:28:25.269355    3604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52045
	I0806 00:28:25.269697    3604 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:28:25.270049    3604 main.go:141] libmachine: Using API Version  1
	I0806 00:28:25.270069    3604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:28:25.270294    3604 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:28:25.270410    3604 main.go:141] libmachine: (ha-772000-m02) Calling .GetState
	I0806 00:28:25.270494    3604 main.go:141] libmachine: (ha-772000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:28:25.270566    3604 main.go:141] libmachine: (ha-772000-m02) DBG | hyperkit pid from json: 3489
	I0806 00:28:25.271445    3604 main.go:141] libmachine: (ha-772000-m02) DBG | hyperkit pid 3489 missing from process table
	I0806 00:28:25.271493    3604 status.go:330] ha-772000-m02 host status = "Stopped" (err=<nil>)
	I0806 00:28:25.271503    3604 status.go:343] host is not running, skipping remaining checks
	I0806 00:28:25.271510    3604 status.go:257] ha-772000-m02 status: &{Name:ha-772000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 00:28:25.271521    3604 status.go:255] checking status of ha-772000-m04 ...
	I0806 00:28:25.271777    3604 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:28:25.271817    3604 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:28:25.280299    3604 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52047
	I0806 00:28:25.280624    3604 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:28:25.280938    3604 main.go:141] libmachine: Using API Version  1
	I0806 00:28:25.280947    3604 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:28:25.281151    3604 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:28:25.281251    3604 main.go:141] libmachine: (ha-772000-m04) Calling .GetState
	I0806 00:28:25.281325    3604 main.go:141] libmachine: (ha-772000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:28:25.281405    3604 main.go:141] libmachine: (ha-772000-m04) DBG | hyperkit pid from json: 3522
	I0806 00:28:25.282295    3604 main.go:141] libmachine: (ha-772000-m04) DBG | hyperkit pid 3522 missing from process table
	I0806 00:28:25.282321    3604 status.go:330] ha-772000-m04 host status = "Stopped" (err=<nil>)
	I0806 00:28:25.282329    3604 status.go:343] host is not running, skipping remaining checks
	I0806 00:28:25.282335    3604 status.go:257] ha-772000-m04 status: &{Name:ha-772000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (24.98s)

TestImageBuild/serial/Setup (40.45s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-036000 --driver=hyperkit 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-036000 --driver=hyperkit : (40.450135698s)
--- PASS: TestImageBuild/serial/Setup (40.45s)

TestImageBuild/serial/NormalBuild (1.61s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-036000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-036000: (1.613629163s)
--- PASS: TestImageBuild/serial/NormalBuild (1.61s)

TestImageBuild/serial/BuildWithBuildArg (0.75s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-036000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.75s)

TestImageBuild/serial/BuildWithDockerIgnore (0.56s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-036000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.56s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.56s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-036000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.56s)

TestJSONOutput/start/Command (53.81s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-960000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-960000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (53.807913891s)
--- PASS: TestJSONOutput/start/Command (53.81s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.5s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-960000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.50s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.46s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-960000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.46s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.35s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-960000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-960000 --output=json --user=testUser: (8.345485873s)
--- PASS: TestJSONOutput/stop/Command (8.35s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.59s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-140000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-140000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (380.848051ms)

-- stdout --
	{"specversion":"1.0","id":"cdb81b43-3469-4d18-b4cc-d7b06a3be0b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-140000] minikube v1.33.1 on Darwin 14.5","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6860288a-c629-4f2c-89d8-abeb64ab9e22","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19370"}}
	{"specversion":"1.0","id":"54557f4b-281f-4882-892b-3c7b3662d9d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig"}}
	{"specversion":"1.0","id":"47afe5db-c44b-4013-8b09-6706c243f83f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"55249397-f746-4062-a4b7-caa104a62501","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"36c8491b-31fe-4ef5-b528-dc2f9a766899","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube"}}
	{"specversion":"1.0","id":"d965d8bd-8ba4-48ec-b7ac-6018fea1dabb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ff9555cc-903f-4700-943d-5a1f2cab1b0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-140000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-140000
--- PASS: TestErrorJSONOutput (0.59s)

TestMainNoArgs (0.08s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (88.45s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-500000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-500000 --driver=hyperkit : (38.580609362s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-502000 --driver=hyperkit 
E0806 00:32:41.385150    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-502000 --driver=hyperkit : (38.50753757s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-500000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-502000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-502000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-502000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-502000: (5.310679041s)
helpers_test.go:175: Cleaning up "first-500000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-500000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-500000: (5.274200542s)
--- PASS: TestMinikubeProfile (88.45s)

TestMultiNode/serial/MultiNodeLabels (0.05s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-100000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.05s)

TestMultiNode/serial/ProfileList (0.19s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.19s)

TestMultiNode/serial/DeleteNode (11.27s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-100000 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-darwin-amd64 -p multinode-100000 node delete m03: (10.926585849s)
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-100000 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (11.27s)

TestMultiNode/serial/StopMultiNode (16.81s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-100000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-amd64 -p multinode-100000 stop: (16.651402171s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-100000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-100000 status: exit status 7 (80.595188ms)

-- stdout --
	multinode-100000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-100000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-100000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-100000 status --alsologtostderr: exit status 7 (77.925309ms)

-- stdout --
	multinode-100000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-100000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0806 00:59:48.133157    5654 out.go:291] Setting OutFile to fd 1 ...
	I0806 00:59:48.133840    5654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:59:48.133849    5654 out.go:304] Setting ErrFile to fd 2...
	I0806 00:59:48.133855    5654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0806 00:59:48.134358    5654 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19370-944/.minikube/bin
	I0806 00:59:48.134543    5654 out.go:298] Setting JSON to false
	I0806 00:59:48.134566    5654 mustload.go:65] Loading cluster: multinode-100000
	I0806 00:59:48.134593    5654 notify.go:220] Checking for updates...
	I0806 00:59:48.134844    5654 config.go:182] Loaded profile config "multinode-100000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0806 00:59:48.134860    5654 status.go:255] checking status of multinode-100000 ...
	I0806 00:59:48.135198    5654 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:59:48.135247    5654 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:59:48.144128    5654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53228
	I0806 00:59:48.144505    5654 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:59:48.144918    5654 main.go:141] libmachine: Using API Version  1
	I0806 00:59:48.144927    5654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:59:48.145143    5654 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:59:48.145272    5654 main.go:141] libmachine: (multinode-100000) Calling .GetState
	I0806 00:59:48.145359    5654 main.go:141] libmachine: (multinode-100000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:59:48.145429    5654 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid from json: 5446
	I0806 00:59:48.146316    5654 main.go:141] libmachine: (multinode-100000) DBG | hyperkit pid 5446 missing from process table
	I0806 00:59:48.146346    5654 status.go:330] multinode-100000 host status = "Stopped" (err=<nil>)
	I0806 00:59:48.146353    5654 status.go:343] host is not running, skipping remaining checks
	I0806 00:59:48.146360    5654 status.go:257] multinode-100000 status: &{Name:multinode-100000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0806 00:59:48.146382    5654 status.go:255] checking status of multinode-100000-m02 ...
	I0806 00:59:48.146627    5654 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0806 00:59:48.146645    5654 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0806 00:59:48.154946    5654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53230
	I0806 00:59:48.155295    5654 main.go:141] libmachine: () Calling .GetVersion
	I0806 00:59:48.155652    5654 main.go:141] libmachine: Using API Version  1
	I0806 00:59:48.155664    5654 main.go:141] libmachine: () Calling .SetConfigRaw
	I0806 00:59:48.155861    5654 main.go:141] libmachine: () Calling .GetMachineName
	I0806 00:59:48.155976    5654 main.go:141] libmachine: (multinode-100000-m02) Calling .GetState
	I0806 00:59:48.156078    5654 main.go:141] libmachine: (multinode-100000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0806 00:59:48.156133    5654 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid from json: 5480
	I0806 00:59:48.157037    5654 main.go:141] libmachine: (multinode-100000-m02) DBG | hyperkit pid 5480 missing from process table
	I0806 00:59:48.157063    5654 status.go:330] multinode-100000-m02 host status = "Stopped" (err=<nil>)
	I0806 00:59:48.157069    5654 status.go:343] host is not running, skipping remaining checks
	I0806 00:59:48.157075    5654 status.go:257] multinode-100000-m02 status: &{Name:multinode-100000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (16.81s)

TestMultiNode/serial/RestartMultiNode (122.3s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-100000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
multinode_test.go:376: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-100000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : (2m1.968142959s)
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-100000 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (122.30s)

TestMultiNode/serial/ValidateNameConflict (41.84s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-100000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-100000-m02 --driver=hyperkit 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-100000-m02 --driver=hyperkit : exit status 14 (430.977127ms)

-- stdout --
	* [multinode-100000-m02] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-100000-m02' is duplicated with machine name 'multinode-100000-m02' in profile 'multinode-100000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-100000-m03 --driver=hyperkit 
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-100000-m03 --driver=hyperkit : (37.565710739s)
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-100000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-100000: exit status 80 (297.467826ms)

-- stdout --
	* Adding node m03 to cluster multinode-100000 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-100000-m03 already exists in multinode-100000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-100000-m03
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-100000-m03: (3.491844444s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (41.84s)

TestPreload (178.72s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-161000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
E0806 01:03:05.476939    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
E0806 01:03:22.422283    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-161000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m53.048236977s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-161000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-161000 image pull gcr.io/k8s-minikube/busybox: (1.337349914s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-161000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-161000: (8.382895454s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-161000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-161000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : (50.552224224s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-161000 image list
helpers_test.go:175: Cleaning up "test-preload-161000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-161000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-161000: (5.24240333s)
--- PASS: TestPreload (178.72s)

TestSkaffold (112.63s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe3626928107 version
skaffold_test.go:59: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe3626928107 version: (1.759374383s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-699000 --memory=2600 --driver=hyperkit 
E0806 01:08:22.427852    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-699000 --memory=2600 --driver=hyperkit : (37.521492744s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe3626928107 run --minikube-profile skaffold-699000 --kube-context skaffold-699000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe3626928107 run --minikube-profile skaffold-699000 --kube-context skaffold-699000 --status-check=true --port-forward=false --interactive=false: (55.707375696s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-6b485d6dfb-hjzv2" [daeb3864-373f-4dc7-a482-2570e353184a] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003199551s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-8d98c8988-dbfgr" [e9e2a51b-7694-401e-8f3b-3491f4487ebc] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003954258s
helpers_test.go:175: Cleaning up "skaffold-699000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-699000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-699000: (5.240591991s)
--- PASS: TestSkaffold (112.63s)

TestRunningBinaryUpgrade (90.12s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.4107846470 start -p running-upgrade-690000 --memory=2200 --vm-driver=hyperkit 
E0806 01:23:22.442653    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.4107846470 start -p running-upgrade-690000 --memory=2200 --vm-driver=hyperkit : (57.172048864s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-690000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:130: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-690000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (25.649314984s)
helpers_test.go:175: Cleaning up "running-upgrade-690000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-690000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-690000: (5.236710988s)
--- PASS: TestRunningBinaryUpgrade (90.12s)

TestKubernetesUpgrade (1334.21s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-346000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:222: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-346000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit : (51.553399479s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-346000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-346000: (2.366997293s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-346000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-346000 status --format={{.Host}}: exit status 7 (68.748954ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-346000 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=hyperkit 
E0806 01:27:41.551123    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
E0806 01:28:22.487811    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
E0806 01:29:04.608116    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
E0806 01:29:43.598580    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/skaffold-699000/client.crt: no such file or directory
E0806 01:31:06.654822    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/skaffold-699000/client.crt: no such file or directory
E0806 01:32:41.554806    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
E0806 01:33:22.493983    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
E0806 01:34:43.603211    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/skaffold-699000/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-346000 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=hyperkit : (10m39.842295379s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-346000 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-346000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-346000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit : exit status 106 (770.195891ms)

-- stdout --
	* [kubernetes-upgrade-346000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-rc.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-346000
	    minikube start -p kubernetes-upgrade-346000 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3460002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-346000 --kubernetes-version=v1.31.0-rc.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-346000 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=hyperkit 
E0806 01:36:25.553368    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
E0806 01:37:41.559168    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
E0806 01:38:22.497939    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
E0806 01:39:43.559106    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/skaffold-699000/client.crt: no such file or directory
E0806 01:42:41.512738    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
E0806 01:43:22.449568    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
E0806 01:44:43.556487    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/skaffold-699000/client.crt: no such file or directory
E0806 01:45:44.568235    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
version_upgrade_test.go:275: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-346000 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=hyperkit : (10m34.277698113s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-346000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-346000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-346000: (5.281826283s)
--- PASS: TestKubernetesUpgrade (1334.21s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.47s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19370
- KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1261152926/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1261152926/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1261152926/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1261152926/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.47s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (7.04s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19370
- KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3727700054/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3727700054/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3727700054/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3727700054/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (7.04s)

TestStoppedBinaryUpgrade/Setup (1.56s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.56s)

TestStoppedBinaryUpgrade/Upgrade (132.59s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.1380277538 start -p stopped-upgrade-234000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.1380277538 start -p stopped-upgrade-234000 --memory=2200 --vm-driver=hyperkit : (41.283702392s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.1380277538 -p stopped-upgrade-234000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.1380277538 -p stopped-upgrade-234000 stop: (8.232856551s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-234000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E0806 01:47:41.512155    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/functional-439000/client.crt: no such file or directory
E0806 01:47:46.613071    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/skaffold-699000/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-234000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (1m23.068520114s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (132.59s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.37s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-234000
version_upgrade_test.go:206: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-234000: (2.368959542s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.37s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.47s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-883000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-883000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (465.902661ms)

-- stdout --
	* [NoKubernetes-883000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19370
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19370-944/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19370-944/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.47s)

TestNoKubernetes/serial/StartWithK8s (71.58s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-883000 --driver=hyperkit 
E0806 01:48:22.449629    1437 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19370-944/.minikube/profiles/addons-331000/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-883000 --driver=hyperkit : (1m11.423711505s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-883000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (71.58s)


Test skip (20/222)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-rc.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-rc.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)